Tags: postgresql, query-optimization, cost-based-optimizer
What makes bad row estimates such a pain point for SQL query performance? I'm interested in understanding the internal reasons.

Often a bad row estimate still ends up choosing the correct plan, and the only difference between the good query and the bad query is the estimated number of rows.

Why is there often such a huge performance difference?

Is it because Postgres uses the row estimates to allocate memory?
The PostgreSQL optimizer is a cost-based optimizer (CBO): a query is executed with the plan that has the lowest estimated cost among the candidate execution plans, and that cost is computed from the table's statistics.
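As a rough illustration of where those costs come from: the documented baseline cost of a sequential scan is pages_read * seq_page_cost + rows_scanned * cpu_tuple_cost, and both inputs come from statistics stored in pg_class. A minimal sketch that recomputes this from the catalogs (it assumes the t1 table created below exists and has been analyzed):

-- Planner's baseline cost for a sequential scan, rebuilt from
-- catalog statistics: relpages and reltuples are what ANALYZE stores.
SELECT relpages * current_setting('seq_page_cost')::float8
     + reltuples * current_setting('cpu_tuple_cost')::float8 AS seq_scan_cost
FROM pg_class
WHERE relname = 't1';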
Why are bad row estimates slow in Postgres?

Because bad statistics can make the planner choose the wrong execution plan. Here is an example.

There are two tables: T1 with 20,000,000 rows and T2 with 1,000,000 rows.
CREATE TABLE T1 (
    ID   INT NOT NULL PRIMARY KEY,
    val  INT NOT NULL,
    col1 UUID NOT NULL,
    col2 UUID NOT NULL,
    col3 UUID NOT NULL,
    col4 UUID NOT NULL,
    col5 UUID NOT NULL,
    col6 UUID NOT NULL
);

INSERT INTO T1
SELECT i,
       RANDOM() * 1000000,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid
FROM generate_series(1,20000000) i;
CREATE TABLE T2 (
    ID   INT NOT NULL PRIMARY KEY,
    val  INT NOT NULL,
    col1 UUID NOT NULL,
    col2 UUID NOT NULL,
    col3 UUID NOT NULL,
    col4 UUID NOT NULL,
    col5 UUID NOT NULL,
    col6 UUID NOT NULL
);

INSERT INTO T2
SELECT i,
       RANDOM() * 1000000,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid
FROM generate_series(1,1000000) i;
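Once the tables are loaded (and autovacuum or ANALYZE has had a chance to run), the statistics the planner will work from can be inspected directly: reltuples in pg_class is the planner's row-count estimate, and pg_stats holds the per-column statistics. Note that right after a bulk load these values may still be stale, which is exactly the point of this answer.

SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname IN ('t1', 't2');

SELECT tablename, attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 't1' AND attname = 'id';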
When we join the two tables, we get an execution plan that uses a Merge Join:
EXPLAIN (ANALYZE,TIMING ON,BUFFERS ON)
SELECT t1.*
FROM T1
INNER JOIN T2 ON t1.id = t2.id
WHERE t1.id < 1000000;
"Gather (cost=1016.37..30569.85 rows=53968 width=104) (actual time=0.278..837.297 rows=999999 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" Buffers: shared hit=38273 read=21841"
" -> Merge Join (cost=16.37..24173.05 rows=22487 width=104) (actual time=11.993..662.770 rows=333333 loops=3)"
" Merge Cond: (t2.id = t1.id)"
" Buffers: shared hit=38273 read=21841"
" -> Parallel Index Only Scan using t2_pkey on t2 (cost=0.42..20147.09 rows=416667 width=4) (actual time=0.041..69.947 rows=333333 loops=3)"
" Heap Fetches: 0"
" Buffers: shared hit=6 read=2732"
" -> Index Scan using t1_pkey on t1 (cost=0.44..48427.24 rows=1079360 width=104) (actual time=0.041..329.874 rows=999819 loops=3)"
" Index Cond: (id < 1000000)"
" Buffers: shared hit=38267 read=19109"
"Planning:"
" Buffers: shared hit=4 read=8"
"Planning Time: 0.228 ms"
"Execution Time: 906.760 ms"
But then I update a lot of rows, as shown below, adding 100,000,000 to every id that is smaller than 1,000,000.
If we run the same query again, it still uses a Merge Join, even though a much better option now exists. The reason is that the update has not reached the point where autovacuum refreshes the statistics: an automatic ANALYZE is triggered only after roughly autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples rows have changed (by default 50 + 10% of the table, i.e. about 2,000,050 rows here), and this update changed only about 1,000,000 of T1's 20,000,000 rows.
UPDATE T1
SET id = id + 100000000
WHERE id < 1000000;
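A quick way to confirm that the statistics are stale is the pg_stat_user_tables view, which exposes n_mod_since_analyze (PostgreSQL 9.4+), the number of rows changed since the last ANALYZE, shown here alongside the relevant autovacuum settings:

SELECT relname,
       n_mod_since_analyze,  -- rows modified since the last ANALYZE
       last_analyze,
       last_autoanalyze,
       current_setting('autovacuum_analyze_threshold')    AS analyze_threshold,
       current_setting('autovacuum_analyze_scale_factor') AS analyze_scale_factor
FROM pg_stat_user_tables
WHERE relname = 't1';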
"Gather (cost=1016.37..30707.83 rows=53968 width=104) (actual time=51.403..55.517 rows=0 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" Buffers: shared hit=8215"
" -> Merge Join (cost=16.37..24311.03 rows=22487 width=104) (actual time=6.736..6.738 rows=0 loops=3)"
" Merge Cond: (t2.id = t1.id)"
" Buffers: shared hit=8215"
" -> Parallel Index Only Scan using t2_pkey on t2 (cost=0.42..20147.09 rows=416667 width=4) (actual time=0.024..0.024 rows=1 loops=3)"
" Heap Fetches: 0"
" Buffers: shared hit=8"
" -> Index Scan using t1_pkey on t1 (cost=0.44..50848.71 rows=1133330 width=104) (actual time=6.710..6.710 rows=0 loops=3)"
" Index Cond: (id < 1000000)"
" Buffers: shared hit=8207"
"Planning:"
" Buffers: shared hit=2745"
"Planning Time: 3.938 ms"
"Execution Time: 55.550 ms"
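The planner is still just comparing cost numbers, and those numbers are now wrong. As an experiment, you can hide the merge join from it with the standard planner setting enable_mergejoin and compare what it falls back to (the resulting plan and timings will of course vary):

SET enable_mergejoin = off;  -- discourage merge joins for this session

EXPLAIN (ANALYZE,TIMING ON,BUFFERS ON)
SELECT t1.*
FROM T1
INNER JOIN T2 ON t1.id = t2.id
WHERE t1.id < 1000000;

RESET enable_mergejoin;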
When we manually run ANALYZE T1; which refreshes the statistics of table T1, and then run the query again, we get a Nested Loop, which is better here than the Merge Join:
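For completeness, these are the statements; the plan below is their output:

ANALYZE T1;

EXPLAIN (ANALYZE,TIMING ON,BUFFERS ON)
SELECT t1.*
FROM T1
INNER JOIN T2 ON t1.id = t2.id
WHERE t1.id < 1000000;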
"QUERY PLAN"
"Nested Loop (cost=0.86..8.90 rows=1 width=104) (actual time=0.004..0.004 rows=0 loops=1)"
" Buffers: shared hit=3"
" -> Index Scan using t1_pkey on t1 (cost=0.44..4.46 rows=1 width=104) (actual time=0.003..0.003 rows=0 loops=1)"
" Index Cond: (id < 1000000)"
" Buffers: shared hit=3"
" -> Index Only Scan using t2_pkey on t2 (cost=0.42..4.44 rows=1 width=4) (never executed)"
" Index Cond: (id = t1.id)"
" Heap Fetches: 0"
"Planning:"
" Buffers: shared hit=20"
"Planning Time: 0.232 ms"
"Execution Time: 0.027 ms"
A short conclusion:

Accurate table statistics help the optimizer compute accurate costs and therefore arrive at the correct execution plan.

Here is a script that lets us check when a table was last analyzed and vacuumed (last_analyze / last_vacuum):
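A minimal version using the pg_stat_user_tables statistics view:

SELECT relname,
       last_vacuum,
       last_autovacuum,
       last_analyze,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;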