Gajus · postgresql · vacuum · update
I have a table that contains a list of tasks that need to be run periodically:
applaudience=> \d+ maintenance_task
Table "public.maintenance_task"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
------------------------------------+--------------------------+-----------+----------+----------------------------------------------+----------+--------------+-------------
id | integer | | not null | nextval('maintenance_task_id_seq'::regclass) | plain | |
nid | citext | | not null | | extended | |
execution_interval | interval | | not null | | plain | |
last_attempted_at | timestamp with time zone | | | now() | plain | |
last_maintenance_task_execution_id | integer | | | | plain | |
disabled_at | timestamp with time zone | | | | plain | |
maximum_execution_duration | interval | | not null | '00:05:00'::interval | plain | |
maximum_concurrent_execution_count | integer | | not null | 0 | plain | |
last_exhausted_at | timestamp with time zone | | not null | now() | plain | |
Indexes:
"maintenance_task_pkey" PRIMARY KEY, btree (id)
"maintenance_task_name_idx" UNIQUE, btree (nid)
Foreign-key constraints:
"maintenance_task_last_maintenance_task_execution_id_fkey" FOREIGN KEY (last_maintenance_task_execution_id) REFERENCES maintenance_task_execution(id) ON DELETE SET NULL
Referenced by:
TABLE "maintenance_task_execution" CONSTRAINT "maintenance_task_execution_maintenance_task_id_fkey" FOREIGN KEY (maintenance_task_id) REFERENCES maintenance_task(id) ON DELETE CASCADE
Options: autovacuum_vacuum_threshold=0, autovacuum_analyze_threshold=0, fillfactor=50
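The per-table storage parameters shown in the Options line above can be set with ALTER TABLE. This is shown for completeness only; it reproduces the settings from the \d+ output rather than anything new:

```sql
-- Fire autovacuum as soon as dead tuples accumulate (threshold 0), and
-- leave half of each page free so updates can stay on the same page
-- (HOT updates), which avoids index churn.
ALTER TABLE maintenance_task SET (
    autovacuum_vacuum_threshold = 0,
    autovacuum_analyze_threshold = 0,
    fillfactor = 50
);
```

Note that with the default autovacuum_vacuum_scale_factor still in effect, autovacuum is triggered by threshold + scale_factor * reltuples, so a threshold of 0 alone does not guarantee a vacuum after every update.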
Every time a task is picked for execution, we update the value of last_attempted_at. The following query is used to schedule a new task:
CREATE OR REPLACE FUNCTION schedule_maintenance_task()
RETURNS table(maintenance_task_id int)
AS $$
BEGIN
RETURN QUERY
EXECUTE $q$
UPDATE maintenance_task
SET last_attempted_at = now()
WHERE
id = (
WITH
active_maintenance_task_execution_count AS (
SELECT DISTINCT ON (maintenance_task_id)
maintenance_task_id,
execution_count
FROM (
SELECT
id maintenance_task_id,
0 execution_count
FROM maintenance_task
UNION
SELECT
mte1.maintenance_task_id,
count(*) execution_count
FROM maintenance_task_execution mte1
WHERE
mte1.ended_at IS NULL
GROUP BY mte1.maintenance_task_id
) AS t
ORDER BY
maintenance_task_id,
execution_count DESC
)
SELECT mt1.id
FROM maintenance_task mt1
INNER JOIN active_maintenance_task_execution_count amtec1 ON amtec1.maintenance_task_id = mt1.id
WHERE
mt1.disabled_at IS NULL AND
mt1.maximum_concurrent_execution_count >= amtec1.execution_count AND
(
mt1.last_attempted_at < now() - mt1.execution_interval OR
mt1.last_exhausted_at < now() - mt1.execution_interval
)
ORDER BY
mt1.last_attempted_at ASC
LIMIT 1
FOR UPDATE OF mt1 SKIP LOCKED
)
RETURNING id
$q$;
END
$$
LANGUAGE plpgsql
SET work_mem='50MB';
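A worker claims the next due task simply by selecting from the function; the follow-up INSERT is a sketch of the assumed bookkeeping (the started_at column on maintenance_task_execution is an assumption, not taken from the original):

```sql
-- Returns zero rows (nothing due, or every due task is locked by another
-- worker thanks to SKIP LOCKED) or exactly one row (LIMIT 1).
SELECT maintenance_task_id
FROM schedule_maintenance_task();

-- The worker would then record an execution for the claimed task, e.g.:
-- INSERT INTO maintenance_task_execution (maintenance_task_id, started_at)
-- VALUES ($1, now());
```

Because the UPDATE takes a FOR UPDATE ... SKIP LOCKED row lock, concurrent workers never block each other on the same task row.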
The schedule_maintenance_task query is running at a rate of roughly 600 calls per minute.

Problems start to appear after about 24 hours:
applaudience=> EXPLAIN (analyze, buffers)
applaudience-> SELECT id
applaudience-> FROM maintenance_task;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
Seq Scan on maintenance_task (cost=0.00..7715.86 rows=286886 width=4) (actual time=3.675..385.042 rows=31 loops=1)
Buffers: shared hit=9455
Planning time: 0.236 ms
Execution time: 385.060 ms
(4 rows)
applaudience=> SELECT *
applaudience-> FROM pg_stat_all_tables
applaudience-> WHERE schemaname = 'public' AND relname = 'maintenance_task';
relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze | vacuum_count | autovacuum_count | analyze_count | autoanalyze_count
----------+------------+------------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+-------------+-------------------------------+--------------+-------------------------------+--------------+------------------+---------------+-------------------
22903432 | public | maintenance_task | 163230 | 5060130 | 5571795 | 7988441 | 0 | 185359 | 0 | 172989 | 148568 | 138285 | 9733 | | 2018-12-09 11:00:33.978177+00 | | 2018-12-09 10:01:07.945327+00 | 0 | 6922 | 0 | 1416
(1 row)
The number of dead tuples grows to 100k+. A simple seq scan needs to read 9k+ buffers to fetch just 31 rows.
Here is a VACUUM VERBOSE maintenance_task log:
INFO: vacuuming "public.maintenance_task"
INFO: index "maintenance_task_pkey" now contains 9555 row versions in 331 pages
DETAIL: 0 index row versions were removed.
282 index pages have been deleted, 282 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO: index "maintenance_task_name_idx" now contains 9555 row versions in 787 pages
DETAIL: 0 index row versions were removed.
690 index pages have been deleted, 690 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO: "maintenance_task": found 0 removable, 145247 nonremovable row versions in 2459 out of 4847 pages
DETAIL: 145217 dead row versions cannot be removed yet, oldest xmin: 928967630
There were 180 unused item pointers.
Skipped 1 page due to buffer pins, 2387 frozen pages.
0 pages are entirely empty.
CPU: user: 0.05 s, system: 0.00 s, elapsed: 0.34 s.
INFO: vacuuming "pg_toast.pg_toast_22903432"
INFO: index "pg_toast_22903432_index" now contains 0 row versions in 1 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO: "pg_toast_22903432": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 928967630
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM
What can be done to prevent the number of dead tuples from growing and the scheduling query from slowing down?
My original assessment was wrong.

Thanks to the help I received on Freenode, I was able to identify the root cause and stop the table from bloating.

The first thing I had to correct was my understanding of how VACUUM works. VACUUM cannot return disk space to the operating system unless the reclaimable pages are at the very end of the relation file on disk. However, VACUUM can make the space inside existing pages reusable: if VACUUM runs between updates, new row versions are stored in place of the dead ones within the same pages, and the number of pages does not grow.

For VACUUM to be able to return space to the operating system, the conditions are roughly: the dead row versions must be removable (not still visible to any open transaction), and the resulting empty pages must sit at the end of the relation file.

The fact that my table kept growing in pages indicated that one of these conditions was not being met.
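To see where the space in the relation file is actually going, the pgstattuple extension can be used (assuming it is installed on the server; this check was not part of the original diagnosis):

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- table_len is the physical file size; a large dead_tuple_count with
-- little free_space means VACUUM is not managing to clean pages.
SELECT table_len,
       tuple_count,
       dead_tuple_count,
       free_space
FROM pgstattuple('maintenance_task');
```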
So the first thing to check is what the oldest live transaction is:

applaudience=> SELECT age(backend_xmin), (now() - xact_start), query
applaudience-> FROM pg_stat_activity
applaudience-> WHERE backend_xmin IS NOT NULL
applaudience-> ORDER BY age(backend_xmin) DESC
applaudience-> LIMIT 1;
   age   |    ?column?     | query
---------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 4644352 | 12:31:38.895198 | -- Metabase                                                                                                                                                                    +
         |                 | SELECT "public"."http_response"."body" AS "body" FROM "public"."http_response" GROUP BY "public"."http_response"."body" ORDER BY "public"."http_response"."body" ASC LIMIT 5000
(1 row)
It turns out that in my case there was a very long-running query that was preventing VACUUM from removing dead rows. The same could have been inferred from the VACUUM VERBOSE log:
DETAIL: 145217 dead row versions cannot be removed yet, oldest xmin: 928967630

This entry in the log points either to an extremely high frequency of updates or to a long-running transaction holding back the xmin horizon and preventing cleanup.
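Once the backend holding back the xmin horizon is identified via pg_stat_activity, it can be cancelled or terminated with the standard admin functions (the pid 12345 below is a placeholder, not from the original session):

```sql
-- List backends holding back the xmin horizon, oldest first.
SELECT pid, age(backend_xmin), now() - xact_start AS duration, query
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC;

-- Cancel just the running query (gentler) ...
SELECT pg_cancel_backend(12345);
-- ... or terminate the backend entirely.
SELECT pg_terminate_backend(12345);
```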
To fix the problem, I had to terminate the long-running query and then run VACUUM FULL maintenance_task once.

Old answer:
There does not seem to be a way to update tuples without causing bloat.

Based on everything I have read, periodically running VACUUM FULL (or one of the equivalent table-rewriting variants) appears to be unavoidable:

    Plain VACUUM may not be satisfactory when a table contains large numbers of dead row versions as a result of massive update or delete activity. If you have such a table and you need to reclaim the excess disk space it occupies, you will need to use VACUUM FULL, or alternatively CLUSTER or one of the table-rewriting variants of ALTER TABLE. These commands rewrite an entire new copy of the table and build new indexes for it. All these options require exclusive lock. Note that they also temporarily use extra disk space approximately equal to the size of the table, since the old copies of the table and indexes can't be released until the new ones are complete.

https://www.postgresql.org/docs/current/routine-vacuuming.html
Assuming I am right about the above, then the solution is to minimise the impact of the full-table rewrite. To that end, I discovered pg_repack:

    pg_repack – reorganize tables in PostgreSQL databases with minimal locks.
Since the maintenance_task table is small (fewer than 50 rows), I should be able to run pg_repack as often as every hour with minimal impact on the scheduling workers.

Unfortunately, this solution does not work well when a table contains a large number of rows that are updated regularly.
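A periodic pg_repack run for a single table looks roughly like this (connection details are placeholders; "applaudience" matches the database name in the sessions above):

```
# Rewrite only the bloated table; pg_repack holds its exclusive lock
# only briefly while swapping in the rebuilt copy.
pg_repack --dbname=applaudience --table=maintenance_task
```

pg_repack requires its extension to be installed in the target database (CREATE EXTENSION pg_repack) and, like VACUUM FULL, temporarily needs disk space for the full copy of the table.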