OB sysbench stress test

OS: Ubuntu 22.04
OB: 4.2.0, three-node cluster

Benchmark command:

# sysbench /usr/local/share/sysbench/oltp_read_write.lua --mysql-host=10.xxxx --mysql-port=2883 --mysql-user=xxxx --mysql-password=xxx --mysql-db=sbtest --tables=20 --table-size=1500000 --time=300 --report-interval=10 --threads=32 --db-ps-mode=disable run

After 5 rounds of this benchmark, the disk on the observer host was completely full.

$ tree -L 2
.
├── data
│   └── obdemo
└── redo
    └── obdemo

4 directories, 0 files

$ tree -L 3
.
├── data
│   └── obdemo
│       ├── etc3
│       └── sstable
└── redo
    └── obdemo
        ├── clog
        ├── etc2
        ├── ilog
        └── slog

$ du -sh */*/*
12K     data/obdemo/etc3
482G    data/obdemo/sstable
241G    redo/obdemo/clog
12K     redo/obdemo/etc2
4.0K    redo/obdemo/ilog
45M     redo/obdemo/slog

$ df -kh 
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           4.0M     0  4.0M   0% /sys/fs/cgroup
/dev/sda3       846G  796G  6.3G 100% /data

Can it really be this bad? 20 tables with 1.5 million rows each, 5 benchmark rounds, and the disk is full!
The other two nodes are also at 97% disk usage. I have already dropped the 20 test tables under sbtest.

How do I get out of this? How can the space be reclaimed? Would a compaction help?

Try a major compaction (full merge).
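The suggestion above can be triggered from the sys tenant. A minimal sketch (statement and view names are from the OB 4.x docs; verify against your version):

```sql
-- Run as the sys tenant administrator (e.g. root@sys via obclient).
-- Trigger a cluster-wide major compaction (full merge):
ALTER SYSTEM MAJOR FREEZE;

-- Check compaction progress per tenant (OB 4.x view; name may differ in other versions):
SELECT * FROM oceanbase.CDB_OB_MAJOR_COMPACTION;
```

Note that even after the merge completes, the preallocated data file itself will not shrink; the merge only reclaims space inside it.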

The data file is preallocated; it can only grow, never shrink. You can specify the data-file size parameters when creating the cluster:
https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000000035159
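As a sketch of those data-file parameters (names per the OB 4.x docs; the values here are illustrative examples, not recommendations):

```sql
-- Fixed preallocated size for the data file:
ALTER SYSTEM SET datafile_size = '300G';
-- Or size it as a percentage of the disk instead:
ALTER SYSTEM SET datafile_disk_percentage = 60;
-- OB 4.2 also supports auto-extend: upper bound and growth step
-- (verify these parameters exist on your exact version):
ALTER SYSTEM SET datafile_maxsize = '500G';
ALTER SYSTEM SET datafile_next = '10G';
```

Starting small with auto-extend avoids preallocating most of the disk up front.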

Data disk: 846G

$ du -sh */*/*
12K     data/obdemo/etc3
482G    data/obdemo/sstable
241G    redo/obdemo/clog
12K     redo/obdemo/etc2
4.0K    redo/obdemo/ilog
45M     redo/obdemo/slog

clog currently uses 241G. If I now set log_disk_utilization_threshold to 20, i.e. 846G * 0.2 = 169G, will clog shrink?


The log_disk_utilization_threshold parameter controls the usage level at which log files start being reused (circular writes):
https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000000035155

If you want to shrink clog, adjust the log_disk_size-related parameters:
https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000000035368
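For example (the unit name is taken from this thread; syntax per the OB 4.x docs, so verify on your version, and the sizes are illustrative):

```sql
-- Cluster/node level: cap the total clog space on each observer.
-- Shrinking this should let the server release preallocated log_pool files:
ALTER SYSTEM SET log_disk_size = '150G';

-- Tenant level: the log disk budget lives on the unit config, e.g.:
ALTER RESOURCE UNIT yl_unit_config LOG_DISK_SIZE = '50G';
```

The tenant-level value must stay within what the node-level log_disk_size can accommodate across all units on the server.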

Each observer node currently has 850G of space. There are two user tenants, and each tenant's unit config is:

obclient [oceanbase]>  select * from __all_unit_config;
+----------------------------+----------------------------+----------------+-----------------+---------+---------+-------------+---------------+---------------------+---------------------+-------------+
| gmt_create                 | gmt_modified               | unit_config_id | name            | max_cpu | min_cpu | memory_size | log_disk_size | max_iops            | min_iops            | iops_weight |
+----------------------------+----------------------------+----------------+-----------------+---------+---------+-------------+---------------+---------------------+---------------------+-------------+
| 2023-09-13 16:54:45.456522 | 2023-09-13 16:54:45.456522 |              1 | sys_unit_config |       1 |       1 |  7516192768 |    7516192768 | 9223372036854775807 | 9223372036854775807 |           1 |
| 2023-09-18 11:32:47.974811 | 2023-09-18 14:51:13.923436 |           1003 | yl_unit_config  |      12 |      12 | 34359738368 |  107374182400 |               10000 |               10000 |           1 |
+----------------------------+----------------------------+----------------+-----------------+---------+---------+-------------+---------------+---------------------+---------------------+-------------+

The tenant-level log disk size here is log_disk_size = 100G.

The data file size was set to 502G:
ALTER SYSTEM SET datafile_size = '502G';
ALTER SYSTEM SET log_disk_utilization_threshold = 20;
Note: log_disk_utilization_threshold sets the tenant log-disk utilization threshold; once a tenant's log disk usage exceeds the tenant's total log disk space multiplied by this value, log files are reused.
With these settings, the clog space still hasn't been reclaimed. How can it be reclaimed?

$ du -sh  /data/oceanbase/redo/obdemo/clog/*
211G    /data/oceanbase/redo/obdemo/clog/log_pool
1.5G    /data/oceanbase/redo/obdemo/clog/tenant_1
961M    /data/oceanbase/redo/obdemo/clog/tenant_1003
833M    /data/oceanbase/redo/obdemo/clog/tenant_1004
833M    /data/oceanbase/redo/obdemo/clog/tenant_1005
27G     /data/oceanbase/redo/obdemo/clog/tenant_1006

You need to change the cluster-level log_disk_size:
alter system set log_disk_size='xxG';
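To confirm whether the shrink took effect, the per-unit log disk allocation and usage can be inspected from the sys tenant. A sketch assuming the OB 4.x GV$OB_UNITS view (column names may differ across versions, so check the view definition first):

```sql
-- Per-unit log disk budget vs. actual usage on each server:
SELECT tenant_id, svr_ip, log_disk_size, log_disk_in_use
FROM oceanbase.GV$OB_UNITS;
```

Comparing log_disk_in_use against the du output of the clog/tenant_* directories shows how much of the drop came from log reuse versus releasing log_pool files.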

Now, however much data I push in the benchmark, the log directory shows no noticeable growth.