Why are the leaders of a partitioned table clustered on only two zones?

OB version: 4.2.1.8-ce

mysql> select * from oceanbase.dba_ob_zones;
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| ZONE  | CREATE_TIME                | MODIFY_TIME                | STATUS | IDC  | REGION   | TYPE      |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| zone1 | 2025-08-01 10:36:50.097351 | 2025-08-08 18:28:31.286667 | ACTIVE | idc1 | HANGZHOU | ReadWrite |
| zone2 | 2025-08-01 10:52:07.041147 | 2025-08-08 18:28:31.289832 | ACTIVE | idc2 | HANGZHOU | ReadWrite |
| zone3 | 2025-08-01 10:52:10.070665 | 2025-08-08 18:28:31.291942 | ACTIVE | idc3 | SHANGHAI | ReadWrite |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
3 rows in set (0.04 sec)

mysql> SELECT SVR_IP,SVR_PORT,ZONE,status,START_SERVICE_TIME,STOP_TIME,WITH_ROOTSERVER FROM oceanbase.DBA_OB_servers;
+--------------+----------+-------+--------+----------------------------+-----------+-----------------+
| SVR_IP       | SVR_PORT | ZONE  | status | START_SERVICE_TIME         | STOP_TIME | WITH_ROOTSERVER |
+--------------+----------+-------+--------+----------------------------+-----------+-----------------+
| 10.xx.xx.30  |     2882 | zone1 | ACTIVE | 2025-08-01 10:40:13.834674 | NULL      | YES             |
| 10.xx.xx.31  |     2882 | zone3 | ACTIVE | 2025-08-01 10:57:06.405188 | NULL      | NO              |
| 10.xx.xx.202 |     2882 | zone2 | ACTIVE | 2025-08-01 14:47:28.505417 | NULL      | NO              |
+--------------+----------+-------+--------+----------------------------+-----------+-----------------+
3 rows in set (0.00 sec)
mysql> select * from oceanbase.dba_ob_tablegroups;
+-----------------+-------------------+----------------------+-----------------+------------------------+------------------------+---------------------------+----------+
| TABLEGROUP_NAME | PARTITIONING_TYPE | SUBPARTITIONING_TYPE | PARTITION_COUNT | DEF_SUBPARTITION_COUNT | PARTITIONING_KEY_COUNT | SUBPARTITIONING_KEY_COUNT | SHARDING |
+-----------------+-------------------+----------------------+-----------------+------------------------+------------------------+---------------------------+----------+
| oceanbase       | NONE              | NONE                 |            NULL |                   NULL |                   NULL |                      NULL | ADAPTIVE |
+-----------------+-------------------+----------------------+-----------------+------------------------+------------------------+---------------------------+----------+
1 row in set (0.01 sec)
mysql> show create table t \G
*************************** 1. row ***************************
       Table: t
Create Table: CREATE TABLE `t` (
  `C1` int(11) DEFAULT NULL,
  `C2` int(11) DEFAULT NULL,
  KEY `INDEX1` (`C2`) BLOCK_SIZE 16384 GLOBAL
) DEFAULT CHARSET = utf8mb4 ROW_FORMAT = DYNAMIC COMPRESSION = 'zstd_1.3.8' REPLICA_NUM = 3 BLOCK_SIZE = 16384 USE_BLOOM_FILTER = FALSE TABLET_SIZE = 134217728 PCTFREE = 0
 partition by hash(c1)
(partition `p0`,
partition `p1`,
partition `p2`,
partition `p3`,
partition `p4`)
1 row in set (0.01 sec)

Tenant info:

mysql> select * from oceanbase.dba_ob_tenants \G
*************************** 1. row ***************************
                 TENANT_ID: 1010
               TENANT_NAME: test
               TENANT_TYPE: USER
               CREATE_TIME: 2025-08-08 14:51:20.009389
               MODIFY_TIME: 2025-08-08 14:51:44.445655
              PRIMARY_ZONE: zone1,zone2,zone3
                  LOCALITY: FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3
         PREVIOUS_LOCALITY: NULL
        COMPATIBILITY_MODE: MYSQL
                    STATUS: NORMAL
             IN_RECYCLEBIN: NO
                    LOCKED: NO
               TENANT_ROLE: PRIMARY
         SWITCHOVER_STATUS: NORMAL
          SWITCHOVER_EPOCH: 0
                  SYNC_SCN: 1754708039700464060
            REPLAYABLE_SCN: 1754708039700464060
              READABLE_SCN: 1754708039700464060
        RECOVERY_UNTIL_SCN: 4611686018427387903
                  LOG_MODE: NOARCHIVELOG
ARBITRATION_SERVICE_STATUS: DISABLED
                  UNIT_NUM: 1
                COMPATIBLE: 4.2.1.8
                 MAX_LS_ID: 1003
1 row in set (0.05 sec)
mysql> select TENANT_ID,DATABASE_NAME,TABLE_NAME,PARTITION_NAME,TABLE_ID,TABLE_TYPE,LS_ID,ZONE,SVR_IP,role from oceanbase.CDB_OB_TABLE_LOCATIONS where  table_name='t' and role='leader';
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+--------+
| TENANT_ID | DATABASE_NAME | TABLE_NAME | PARTITION_NAME | TABLE_ID | TABLE_TYPE | LS_ID | ZONE  | SVR_IP       | role   |
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+--------+
|      1010 | test          | t          | p0             |   500002 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER |
|      1010 | test          | t          | p1             |   500002 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER |
|      1010 | test          | t          | p2             |   500002 | USER TABLE |  1003 | zone1 | 10.xx.xx.30  | LEADER |
|      1010 | test          | t          | p3             |   500002 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER |
|      1010 | test          | t          | p4             |   500002 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER |
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+--------+

The partitioned table's leaders are distributed only on zone1 and zone2 — why aren't they evenly distributed?


@论坛小助手


The leader distribution of a partitioned table is affected by several factors. From the information provided, although the table has 5 partitions, the leader replicas sit only on zone1 and zone2 rather than being spread evenly across all available zones. Possible causes:

  1. Primary Zone setting: the tenant's PRIMARY_ZONE is zone1,zone2,zone3, which means leader replicas are placed preferentially on these zones. However, if some zones are under heavy load, OceanBase may place leaders on less-loaded zones instead. You can check per-zone resource usage via the oceanbase.DBA_OB_SERVERS view.
  2. Resource-unit balancing: OceanBase controls unit balancing via the enable_rebalance and server_balance_cpu_mem_tolerance_percent parameters. If some zones have low utilization, leaders may be scheduled onto them. Check the LS_ID and ZONE columns of oceanbase.CDB_OB_TABLE_LOCATIONS to see the per-zone load.
  3. LS balancing: in OceanBase 4.2.x, balancing the number of LS (log streams) and their leaders is a high-priority balancing strategy. Check the oceanbase.DBA_OB_LOG_STREAMS view for the LS distribution across zones.
  4. Manual intervention: someone may have changed the distribution manually, e.g. by rebuilding partitions with ALTER TABLE ... REBUILD PARTITION.
    It is recommended to check the aspects above to pin down the specific cause.

For details, see:

  1. Traffic distribution
  2. Resource-unit balancing
  3. Intra-tenant balancing
  4. Viewing tenant and resource information
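To quantify the skew described above, a quick sketch (assuming the sys tenant and the same CDB_OB_TABLE_LOCATIONS view already used in this thread) that counts leader replicas per zone:

```sql
-- Leader replicas per zone for tenant 1010 (tenant id taken from this thread)
SELECT zone, COUNT(*) AS leader_cnt
FROM oceanbase.CDB_OB_TABLE_LOCATIONS
WHERE tenant_id = 1010
  AND role = 'LEADER'
GROUP BY zone
ORDER BY zone;
```

A zone with zero (or a clearly smaller) leader_cnt stands out immediately.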

Is it only because of this setting?

mysql> select * from oceanbase.dba_ob_zones;
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| ZONE  | CREATE_TIME                | MODIFY_TIME                | STATUS | IDC  | REGION   | TYPE      |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| zone1 | 2025-08-01 10:36:50.097351 | 2025-08-08 18:28:31.286667 | ACTIVE | idc1 | HANGZHOU | ReadWrite |
| zone2 | 2025-08-01 10:52:07.041147 | 2025-08-08 18:28:31.289832 | ACTIVE | idc2 | HANGZHOU | ReadWrite |
| zone3 | 2025-08-01 10:52:10.070665 | 2025-08-08 18:28:31.291942 | ACTIVE | idc3 | SHANGHAI | ReadWrite |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
3 rows in set (0.04 sec)

Is it only because of this setting — that zone1 and zone2 are in the same region?


Yes, it's strange — I don't know the reason either.


Learning from this.


Learning here too. Indeed, when you work backwards from the result, there is often more than one possible cause.


Thanks for the explanation, learned something.


Check the following from the sys tenant:
select * from GV$OB_PARAMETERS where tenant_id = 1010 and name in ('enable_rebalance', 'balancer_idle_time', 'partition_balance_schedule_interval');
select * from CDB_OB_BALANCE_JOB_HISTORY where tenant_id = 1010;
select * from CDB_OB_TRANSFER_TASK_HISTORY where tenant_id = 1010;

select svr_ip,svr_port,ls_id,B.status,B.primary_zone,B.unit_group_id from DBA_OB_UNITS A join CDB_OB_LS B on A.unit_group_id = B.unit_group_id where B.tenant_id = 1010 and B.ls_id in (1001,1002,1003);

mysql> select * from CDB_OB_TRANSFER_TASK_HISTORY where tenant_id = 1010 \G
*************************** 1. row ***************************
              TENANT_ID: 1010
                TASK_ID: 1
            CREATE_TIME: 2025-08-08 18:51:55.530451
            FINISH_TIME: 2025-08-08 18:51:56.632429
                 SRC_LS: 1001
                DEST_LS: 1003
              PART_LIST: 500009:500009
             PART_COUNT: 1
    NOT_EXIST_PART_LIST: NULL
LOCK_CONFLICT_PART_LIST: NULL
 TABLE_LOCK_TABLET_LIST: NULL
            TABLET_LIST: 1152921504606846978:0
           TABLET_COUNT: 1
              START_SCN: 1754650315717296719
             FINISH_SCN: 1754650316217878816
                 STATUS: COMPLETED
               TRACE_ID: YB420A10121E-00063B44F48C2B2B-0-0
                 RESULT: 0
        BALANCE_TASK_ID: 126843
    TABLE_LOCK_OWNER_ID: 126853
                COMMENT: 
1 row in set (0.00 sec)
mysql> select * from CDB_OB_BALANCE_JOB_HISTORY where tenant_id= 1010 \G
*************************** 1. row ***************************
              TENANT_ID: 1010
                 JOB_ID: 126842
            CREATE_TIME: 2025-08-08 18:51:53.889034
            FINISH_TIME: 2025-08-08 18:52:03.900577
       BALANCE_STRATEGY: partition balance
               JOB_TYPE: PARTITION_BALANCE
        TARGET_UNIT_NUM: 1
TARGET_PRIMARY_ZONE_NUM: 3
                 STATUS: COMPLETED
                COMMENT: NULL
1 row in set (0.00 sec)

mysql> select * from GV$OB_PARAMETERS where tenant_id = 1010 and name in ('enable_rebalance', 'balancer_idle_time', 'partition_balance_schedule_interval') and svr_ip='10.xx.xx.30'\G
*************************** 1. row ***************************
    SVR_IP: 10.xx.xx.30
  SVR_PORT: 2882
      ZONE: zone1
     SCOPE: TENANT
 TENANT_ID: 1010
      NAME: partition_balance_schedule_interval
 DATA_TYPE: NULL
     VALUE: 2h
      INFO: the time interval between generate partition balance task. The value should be no less than balancer_idle_time to enable partition balance. Default value 2h and the value 0s means disable partition balance. Range: [0s, +∞)
   SECTION: LOAD_BALANCE
EDIT_LEVEL: DYNAMIC_EFFECTIVE
*************************** 2. row ***************************
    SVR_IP: 10.xx.xx.30
  SVR_PORT: 2882
      ZONE: zone1
     SCOPE: TENANT
 TENANT_ID: 1010
      NAME: balancer_idle_time
 DATA_TYPE: NULL
     VALUE: 10s
      INFO: the time interval between the schedules of the tenant load-balancing task. Range: [10s, +∞)
   SECTION: LOAD_BALANCE
EDIT_LEVEL: DYNAMIC_EFFECTIVE
*************************** 3. row ***************************
    SVR_IP: 10.xx.xx.30
  SVR_PORT: 2882
      ZONE: zone1
     SCOPE: TENANT
 TENANT_ID: 1010
      NAME: enable_rebalance
 DATA_TYPE: NULL
     VALUE: True
      INFO: specifies whether the tenant load-balancing is turned on. Value:  True:turned on  False: turned off
   SECTION: LOAD_BALANCE
EDIT_LEVEL: DYNAMIC_EFFECTIVE
3 rows in set (0.01 sec)



@淇铭

Could you check whether zone3 has any leader partitions:
SELECT count(*) from CDB_OB_TABLE_LOCATIONS where zone='zone3' and tenant_id = 1010 and role='leader';

Run the queries posted above and take a look at the partition situation on zone3:

mysql> select tenant_id,DATABASE_NAME,TABLE_NAME,PARTITION_NAME,TABLE_ID,TABLE_TYPE,LS_ID,ZONE,SVR_IP,role from oceanbase.CDB_OB_TABLE_LOCATIONS where   role='leader' and zone='zone3';
+-----------+---------------+------------------+----------------+----------+------------+-------+-------+-------------+--------+
| tenant_id | DATABASE_NAME | TABLE_NAME       | PARTITION_NAME | TABLE_ID | TABLE_TYPE | LS_ID | ZONE  | SVR_IP      | role   |
+-----------+---------------+------------------+----------------+----------+------------+-------+-------+-------------+--------+
|      1006 | test          | sbtest4          | NULL           |   500024 | USER TABLE |  1003 | zone3 | 10.xx.xx.31 | LEADER |
|      1006 | test          | __idx_500024_k_4 | NULL           |   500027 | INDEX      |  1003 | zone3 | 10.xx.xx.31 | LEADER |
+-----------+---------------+------------------+----------------+----------+------------+-------+-------+-------------+--------+
2 rows in set (0.48 sec)
mysql> select tenant_id,DATABASE_NAME,TABLE_NAME,PARTITION_NAME,TABLE_ID,TABLE_TYPE,LS_ID,ZONE,SVR_IP,role from oceanbase.CDB_OB_TABLE_LOCATIONS where table_type='user table';
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+----------+
| tenant_id | DATABASE_NAME | TABLE_NAME | PARTITION_NAME | TABLE_ID | TABLE_TYPE | LS_ID | ZONE  | SVR_IP       | role     |
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+----------+
|         1 | ocp           | ops_config | NULL           |   500004 | USER TABLE |     1 | zone1 | 10.xx.xx.30  | LEADER   |
|         1 | ocp           | ops_config | NULL           |   500004 | USER TABLE |     1 | zone2 | 10.xx.xx.202 | FOLLOWER |
|         1 | ocp           | ops_config | NULL           |   500004 | USER TABLE |     1 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1006 | test          | sbtest2    | NULL           |   500022 | USER TABLE |  1002 | zone1 | 10.xx.xx.30  | FOLLOWER |
|      1006 | test          | sbtest2    | NULL           |   500022 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER   |
|      1006 | test          | sbtest2    | NULL           |   500022 | USER TABLE |  1002 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1006 | test          | sbtest3    | NULL           |   500023 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER   |
|      1006 | test          | sbtest3    | NULL           |   500023 | USER TABLE |  1001 | zone2 | 10.xx.xx.202 | FOLLOWER |
|      1006 | test          | sbtest3    | NULL           |   500023 | USER TABLE |  1001 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1006 | test          | sbtest4    | NULL           |   500024 | USER TABLE |  1003 | zone1 | 10.xx.xx.30  | FOLLOWER |
|      1006 | test          | sbtest4    | NULL           |   500024 | USER TABLE |  1003 | zone2 | 10.xx.xx.202 | FOLLOWER |
|      1006 | test          | sbtest4    | NULL           |   500024 | USER TABLE |  1003 | zone3 | 10.xx.xx.31  | LEADER   |
|      1006 | test          | sbtest5    | NULL           |   500025 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER   |
|      1006 | test          | sbtest5    | NULL           |   500025 | USER TABLE |  1001 | zone2 | 10.xx.xx.202 | FOLLOWER |
|      1006 | test          | sbtest5    | NULL           |   500025 | USER TABLE |  1001 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1006 | test          | sbtest1    | NULL           |   500026 | USER TABLE |  1002 | zone1 | 10.xx.xx.30  | FOLLOWER |
|      1006 | test          | sbtest1    | NULL           |   500026 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER   |
|      1006 | test          | sbtest1    | NULL           |   500026 | USER TABLE |  1002 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1010 | test          | t          | p0             |   500002 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER   |
|      1010 | test          | t          | p0             |   500002 | USER TABLE |  1001 | zone2 | 10.xx.xx.202 | FOLLOWER |
|      1010 | test          | t          | p0             |   500002 | USER TABLE |  1001 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1010 | test          | t          | p1             |   500002 | USER TABLE |  1002 | zone1 | 10.xx.xx.30  | FOLLOWER |
|      1010 | test          | t          | p1             |   500002 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER   |
|      1010 | test          | t          | p1             |   500002 | USER TABLE |  1002 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1010 | test          | t          | p2             |   500002 | USER TABLE |  1003 | zone1 | 10.xx.xx.30  | LEADER   |
|      1010 | test          | t          | p2             |   500002 | USER TABLE |  1003 | zone2 | 10.xx.xx.202 | FOLLOWER |
|      1010 | test          | t          | p2             |   500002 | USER TABLE |  1003 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1010 | test          | t          | p3             |   500002 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER   |
|      1010 | test          | t          | p3             |   500002 | USER TABLE |  1001 | zone2 | 10.xx.xx.202 | FOLLOWER |
|      1010 | test          | t          | p3             |   500002 | USER TABLE |  1001 | zone3 | 10.xx.xx.31  | FOLLOWER |
|      1010 | test          | t          | p4             |   500002 | USER TABLE |  1002 | zone1 | 10.xx.xx.30  | FOLLOWER |
|      1010 | test          | t          | p4             |   500002 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER   |
|      1010 | test          | t          | p4             |   500002 | USER TABLE |  1002 | zone3 | 10.xx.xx.31  | FOLLOWER |
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+----------+

mysql> select tenant_id,DATABASE_NAME,TABLE_NAME,PARTITION_NAME,TABLE_ID,TABLE_TYPE,LS_ID,ZONE,SVR_IP,role from oceanbase.CDB_OB_TABLE_LOCATIONS where table_type='user table' and role='leader';
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+--------+
| tenant_id | DATABASE_NAME | TABLE_NAME | PARTITION_NAME | TABLE_ID | TABLE_TYPE | LS_ID | ZONE  | SVR_IP       | role   |
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+--------+
|         1 | ocp           | ops_config | NULL           |   500004 | USER TABLE |     1 | zone1 | 10.xx.xx.30  | LEADER |
|      1006 | test          | sbtest2    | NULL           |   500022 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER |
|      1006 | test          | sbtest3    | NULL           |   500023 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER |
|      1006 | test          | sbtest4    | NULL           |   500024 | USER TABLE |  1003 | zone3 | 10.xx.xx.31  | LEADER |
|      1006 | test          | sbtest5    | NULL           |   500025 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER |
|      1006 | test          | sbtest1    | NULL           |   500026 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER |
|      1010 | test          | t          | p0             |   500002 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER |
|      1010 | test          | t          | p1             |   500002 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER |
|      1010 | test          | t          | p2             |   500002 | USER TABLE |  1003 | zone1 | 10.xx.xx.30  | LEADER |
|      1010 | test          | t          | p3             |   500002 | USER TABLE |  1001 | zone1 | 10.xx.xx.30  | LEADER |
|      1010 | test          | t          | p4             |   500002 | USER TABLE |  1002 | zone2 | 10.xx.xx.202 | LEADER |
+-----------+---------------+------------+----------------+----------+------------+-------+-------+--------------+--------+
11 rows in set (0.31 sec)

@辞霜 @淇铭

Zones at the same priority level must not span multiple regions. With same-priority zones in different regions, cross-region latency is unavoidable; if the gap is too large, the cluster will switch leaders frequently. So it is reasonable for the table's partition leaders to live on zone1 and zone2 — the internal load balancer takes region locality into account when distributing them.
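As a sketch of the PRIMARY_ZONE priority syntax (a comma separates zones of equal priority, a semicolon separates descending priority levels): since zone3 sits in a different region, it can be demoted to a lower priority level instead of sharing the top level, so that no single priority level spans regions:

```sql
-- zone1 and zone2 (same region) share the top priority;
-- zone3 (the other region) gets a strictly lower priority
ALTER TENANT test PRIMARY_ZONE = 'zone1,zone2;zone3';
```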

Then why is there still a leader on zone3? In theory the leaders should all be on zone1 and zone2, since those two zones are in the same region.

And it is only this one table's partition leader that sits on zone3.

mysql> select * from oceanbase.dba_ob_zones;
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| ZONE  | CREATE_TIME                | MODIFY_TIME                | STATUS | IDC  | REGION   | TYPE      |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| zone1 | 2025-08-01 10:36:50.097351 | 2025-08-08 18:28:31.286667 | ACTIVE | idc1 | HANGZHOU | ReadWrite |
| zone2 | 2025-08-01 10:52:07.041147 | 2025-08-08 18:28:31.289832 | ACTIVE | idc2 | HANGZHOU | ReadWrite |
| zone3 | 2025-08-01 10:52:10.070665 | 2025-08-08 18:28:31.291942 | ACTIVE | idc3 | SHANGHAI | ReadWrite |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
3 rows in set (0.04 sec)

The two tenants have different primary_zone settings: the mysql tenant is RANDOM, while the test tenant is zone1,zone2,zone3.

Modifying the mysql tenant.
Zone info before the change:

mysql> select * from oceanbase.dba_ob_zones;
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| ZONE  | CREATE_TIME                | MODIFY_TIME                | STATUS | IDC  | REGION   | TYPE      |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| zone1 | 2025-08-01 10:36:50.097351 | 2025-08-08 18:28:31.286667 | ACTIVE | idc1 | HANGZHOU | ReadWrite |
| zone2 | 2025-08-01 10:52:07.041147 | 2025-08-08 18:28:31.289832 | ACTIVE | idc2 | HANGZHOU | ReadWrite |
| zone3 | 2025-08-01 10:52:10.070665 | 2025-08-08 18:28:31.291942 | ACTIVE | idc3 | SHANGHAI | ReadWrite |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
3 rows in set (0.02 sec)
mysql> alter tenant mysql PRIMARY_ZONE='zone1,zone2,zone3';
ERROR 1235 (0A000): tenant primary zone span regions not supported
mysql> 

mysql> alter system modify zone zone3 region='HANGZHOU';
Query OK, 0 rows affected (0.01 sec)

Zone info after the change:

mysql> select * from oceanbase.dba_ob_zones;
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| ZONE  | CREATE_TIME                | MODIFY_TIME                | STATUS | IDC  | REGION   | TYPE      |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
| zone1 | 2025-08-01 10:36:50.097351 | 2025-08-08 18:28:31.286667 | ACTIVE | idc1 | HANGZHOU | ReadWrite |
| zone2 | 2025-08-01 10:52:07.041147 | 2025-08-08 18:28:31.289832 | ACTIVE | idc2 | HANGZHOU | ReadWrite |
| zone3 | 2025-08-01 10:52:10.070665 | 2025-08-11 15:51:18.606212 | ACTIVE | idc3 | HANGZHOU | ReadWrite |
+-------+----------------------------+----------------------------+--------+------+----------+-----------+
3 rows in set (0.02 sec)

Modify the mysql tenant's primary_zone:


mysql> alter tenant mysql PRIMARY_ZONE='zone1,zone2,zone3';
Query OK, 0 rows affected (0.16 sec)

Modify zone3's region info:

Check the leaders on zone3 again:

The result set is empty.

Whereas previously there were leaders distributed there, as shown in the earlier query results.

Questions:
1. primary_zone cannot span regions. After changing a zone's region attribute so that the regions differ, leaders end up on the zones of the region with the majority;
2. primary_zone=RANDOM and primary_zone=zone1,zone2,zone3 look equivalent on the surface but actually differ — this case demonstrates it;
3. In a two-region, three-datacenter deployment, shouldn't primary_zone be the way to govern leader placement?
4. Why was sbtest4's leader still on zone3, contrary to expectation — is this a bug or something else?

The explanation above that primary_zone cannot span regions is not quite right — it can span regions. What you must watch out for is that
zones at the same priority level must not include multiple regions, otherwise problems can arise, mainly from cross-region network latency. If same-priority zones span regions, that latency is unavoidable; with too large a gap, the cluster will switch leaders frequently.
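To see where each log stream's leader is expected to land, a sketch against the CDB_OB_LS view already used earlier in this thread (run from the sys tenant; LS ids as shown above):

```sql
SELECT tenant_id, ls_id, status, primary_zone
FROM oceanbase.CDB_OB_LS
WHERE tenant_id = 1010
  AND ls_id IN (1001, 1002, 1003);
```

If an LS's primary_zone column still lists zone3 at top priority, that LS's partition leaders will keep landing there.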