
PostgreSQL query runs much faster with an index scan, but the engine chooses a hash join

The query:
SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
 "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027

If I set SET enable_seqscan = off, then it does the fast thing, namely:

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..27349.80 rows=3395 width=72) (actual time=28.726..65.056 rows=3398 loops=1)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.019..2.412 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..5.41 rows=1 width=72) (actual time=0.017..0.017 rows=1 loops=3398)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 65.437 ms
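
(For reference, this is the exact sequence I run in psql to get the plan above; the setting only affects the current session. A minimal sketch, nothing beyond what is already described:)

SET enable_seqscan = off;

EXPLAIN ANALYZE
SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
 "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027;

RESET enable_seqscan;  -- back to the default planner behaviour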

But without the dreaded enable_seqscan, it chooses to do the slower thing:

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=7330.18..18145.24 rows=3395 width=72) (actual time=92.380..535.422 rows=3398 loops=1)
   Hash Cond: (replays_playeringame.game_id = replays_game.id)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.020..2.899 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Hash  (cost=3668.08..3668.08 rows=151208 width=72) (actual time=90.842..90.842 rows=151208 loops=1)
         Buckets: 1024  Batches: 32 (originally 16)  Memory Usage: 1025kB
         ->  Seq Scan on replays_game  (cost=0.00..3668.08 rows=151208 width=72) (actual time=0.020..29.061 rows=151208 loops=1)
 Total runtime: 535.821 ms
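
(Side note: the "Batches: 32 (originally 16)" line means the hash does not fit into work_mem and is spilled in batches. Purely as an experiment of my own, not something anyone suggested here, one could check whether a larger session-level work_mem changes the plan or the timing:)

SET work_mem = '32MB';  -- hypothetical value for the experiment; work_mem defaults to only 1MB on older releases

EXPLAIN ANALYZE
SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
 "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027;

RESET work_mem;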

Here are the relevant indexes:

Index "public.replays_game_pkey"
 Column |  Type   | Definition
--------+---------+------------
 id     | integer | id
primary key, btree, for table "public.replays_game"

Index "public.replays_playeringame_player_id"
  Column   |  Type   | Definition
-----------+---------+------------
 player_id | integer | player_id
btree, for table "public.replays_playeringame"
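
(These listings come from psql; I believe the equivalent meta-commands are, assuming the same index names:)

\d replays_game_pkey
\d replays_playeringame_player_id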

So my question is: what am I doing wrong such that Postgres misestimates the relative costs of the two ways of joining? I can see in the cost estimates that it thinks the hash join will be faster, while its estimate of the cost of the index join is off by a factor of 500.

How do I give Postgres a better clue? I ran a VACUUM ANALYZE immediately before running all of the above.
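
(One knob I could imagine trying, purely a guess on my part and not something anyone has suggested here, is raising the per-column statistics target on the filter column and re-analyzing, so the planner has a finer-grained picture of skewed players:)

ALTER TABLE replays_playeringame
    ALTER COLUMN player_id SET STATISTICS 1000;  -- the default target is 100 on recent versions
ANALYZE replays_playeringame;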

Interestingly, if I run this query for a player with a smaller number of games, Postgres does choose the index scan + nested loop. So something about the large number of games triggers this undesired behaviour, where the relative estimated costs are out of line with the actual costs.

Finally, should I be using Postgres at all? I don't want to become an expert in database tuning; I'm looking for a database that performs reasonably well with a conscientious developer's level of attention, as opposed to a dedicated DBA's. I'm afraid that if I stick with Postgres I'll face a steady stream of issues like this that will force me to become a Postgres expert, and perhaps another DB would be more forgiving of a more casual approach.

A Postgres expert (RhodiumToad) reviewed my full database settings (http://pastebin.com/77QuiQSp) and recommended setting cpu_tuple_cost = 0.1. That gave a dramatic speedup: http://pastebin.com/nTHvSHVd
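
(For anyone wanting to apply that suggestion: it can be tested per session, or made permanent. A sketch; note that ALTER SYSTEM only exists in PostgreSQL 9.4+, on older versions edit postgresql.conf instead:)

-- per session, for experimenting:
SET cpu_tuple_cost = 0.1;

-- cluster-wide (PostgreSQL 9.4+); otherwise put cpu_tuple_cost = 0.1 in postgresql.conf:
ALTER SYSTEM SET cpu_tuple_cost = 0.1;
SELECT pg_reload_conf();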

Alternatively, switching to MySQL also solved the problem rather nicely. I have default installations of MySQL and Postgres on my OS X box, and MySQL is about 2x faster when comparing queries that have been "warmed up" by running them repeatedly. On "cold" queries, i.e. the first time a given query is executed, MySQL is 5 to 150 times faster. Cold-query performance is quite important for my particular application.
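
(To see how much of the cold/warm difference is caching, one option, assuming PostgreSQL 9.0 or newer, is to ask EXPLAIN for buffer statistics; "read" counts are blocks that had to come from the OS cache or disk rather than shared_buffers:)

EXPLAIN (ANALYZE, BUFFERS)
SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
 "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027;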

The big question, as far as I'm concerned, is still open: does Postgres need more tuning and care than MySQL to run well? For example, consider that none of the suggestions from commenters here solved it.

My guess is that you are using the default random_page_cost = 4, which is too high and makes index scans look too expensive.

I tried to reconstruct the two tables with this script:

CREATE TABLE replays_game (
    id integer NOT NULL,
    PRIMARY KEY (id)
);

CREATE TABLE replays_playeringame (
    player_id integer NOT NULL,
    game_id integer NOT NULL,
    PRIMARY KEY (player_id, game_id),
    CONSTRAINT replays_playeringame_game_fkey
        FOREIGN KEY (game_id) REFERENCES replays_game (id)
);

CREATE INDEX ix_replays_playeringame_game_id
    ON replays_playeringame (game_id);

-- 150k games
INSERT INTO replays_game
SELECT generate_series(1,150000);

-- ~150k players, ~2 games each
INSERT INTO replays_playeringame
select trunc(random() * 149999 + 1), generate_series(1, 150000);

INSERT INTO replays_playeringame
SELECT *
FROM
    (
        SELECT
            trunc(random() * 149999 + 1) as player_id,
            generate_series(1, 150000) as game_id
    ) AS t
WHERE
    NOT EXISTS (
        SELECT 1
        FROM replays_playeringame
        WHERE
            t.player_id = replays_playeringame.player_id
            AND t.game_id = replays_playeringame.game_id
    )
;

-- the heavy player with 3000 games
INSERT INTO replays_playeringame
select 999999, generate_series(1, 3000);
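
(Not shown above: before timing anything I would also run ANALYZE, so the planner works from fresh statistics on the newly loaded data; treat this as my assumption about the test setup:)

ANALYZE replays_game;
ANALYZE replays_playeringame;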

With the default value of 4:

game=# set random_page_cost = 4;
SET
game=# explain analyse SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 999999;
                                                                     QUERY PLAN                                                                      
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=1483.54..4802.54 rows=3000 width=4) (actual time=3.640..110.212 rows=3000 loops=1)
   Hash Cond: (replays_game.id = replays_playeringame.game_id)
   ->  Seq Scan on replays_game  (cost=0.00..2164.00 rows=150000 width=4) (actual time=0.012..34.261 rows=150000 loops=1)
   ->  Hash  (cost=1446.04..1446.04 rows=3000 width=4) (actual time=3.598..3.598 rows=3000 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 106kB
         ->  Bitmap Heap Scan on replays_playeringame  (cost=67.54..1446.04 rows=3000 width=4) (actual time=0.586..2.041 rows=3000 loops=1)
               Recheck Cond: (player_id = 999999)
               ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..66.79 rows=3000 width=0) (actual time=0.560..0.560 rows=3000 loops=1)
                     Index Cond: (player_id = 999999)
 Total runtime: 110.621 ms

Lowered to 2:

game=# set random_page_cost = 2;
SET
game=# explain analyse SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 999999;
                                                                  QUERY PLAN                                                                   
-----------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=45.52..4444.86 rows=3000 width=4) (actual time=0.418..27.741 rows=3000 loops=1)
   ->  Bitmap Heap Scan on replays_playeringame  (cost=45.52..1424.02 rows=3000 width=4) (actual time=0.406..1.502 rows=3000 loops=1)
         Recheck Cond: (player_id = 999999)
         ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..44.77 rows=3000 width=0) (actual time=0.388..0.388 rows=3000 loops=1)
               Index Cond: (player_id = 999999)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..0.99 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=3000)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 28.542 ms
(8 rows)

If you were using SSDs, I would lower it further, to 1.1.
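
(If you want to keep the lower value rather than setting it per session, a sketch: scoping it to the database, or to an SSD-backed tablespace, keeps the change contained. "game" is the database name from the prompts above:)

ALTER DATABASE game SET random_page_cost = 2;

-- or, for an SSD-backed tablespace (PostgreSQL 9.0+):
-- ALTER TABLESPACE pg_default SET (random_page_cost = 1.1);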

As for your last question: I really think you should stick with PostgreSQL. I have experience with both PostgreSQL and MSSQL, and I need to put in three times the effort with the latter for it to perform half as well as the former.
