
I have a query that LEFT JOINs two tables with `timestamptz` columns and groups the result by this expression:

(date_trunc(
    'DAY',
    "table_one"."ttz" AT TIME ZONE
    'America/Los_Angeles'
    )
    -
date_trunc(
    'DAY',
    "table_two"."ttz" AT TIME ZONE
    'America/Los_Angeles')) as period

With this kind of grouping, query performance degrades from about 1 second (when grouping by other columns) to 40-60 seconds. Is this a known issue? Is there a workaround? The behavior does not depend on the hardware configuration (it was tested on a server machine with an optimized Postgres configuration). We also use the Citus extension. The tables are partitioned by date range, but that is not related (tested).
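One workaround that sometimes helps with expensive grouping expressions is to compute them once per row in a subquery, so the outer GROUP BY references plain column aliases instead of nested function calls. This is only a sketch against the tables above (grouping by `day_one, day_two` is equivalent to grouping by `table_one_day, period`, since the period is determined by the two days), not a confirmed fix for the Citus planning slowdown:

```sql
SELECT day_one AT TIME ZONE 'America/Los_Angeles' AS table_one_day,
       (day_one - day_two)                         AS period,
       count(DISTINCT user_id)
FROM (
    SELECT date_trunc('DAY', table_one.ttz AT TIME ZONE 'America/Los_Angeles') AS day_one,
           date_trunc('DAY', table_two.ttz AT TIME ZONE 'America/Los_Angeles') AS day_two,
           table_two.user_id
    FROM table_one
             LEFT JOIN table_two ON table_one.user_id = table_two.user_id
) sub
GROUP BY day_one, day_two;
```

Whether this actually reduces the distributed planning time would need to be verified with EXPLAIN ANALYZE on the cluster.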

Table DDL

CREATE TABLE table_one
(
    user_id VARCHAR,
    ttz     timestamptz
);

Query

SELECT date_trunc(
               'DAY',
               table_one."ttz" AT TIME ZONE
               'America/Los_Angeles'
           ) AT TIME ZONE 'America/Los_Angeles' table_one_day,
       (date_trunc(
                'DAY',
                "table_one"."ttz" AT TIME ZONE
                'America/Los_Angeles'
            )
           -
        date_trunc(
                'DAY',
                "table_two"."ttz" AT TIME ZONE
                'America/Los_Angeles'))         period,
       count(DISTINCT table_two.user_id)
FROM table_one
         LEFT JOIN table_two ON table_one.user_id = table_two.user_id
GROUP BY table_one_day, period;

Plan when grouping by table_one_day only

GroupAggregate  (cost=0.00..0.00 rows=0 width=0) (actual time=760.606..760.606 rows=1 loops=1)
  Output: remote_scan.first_ev_day_trunc, count(DISTINCT remote_scan.count)
  Group Key: remote_scan.first_ev_day_trunc
  ->  Sort  (cost=0.00..0.00 rows=0 width=0) (actual time=760.585..760.585 rows=6 loops=1)
        Output: remote_scan.first_ev_day_trunc, remote_scan.count
        Sort Key: remote_scan.first_ev_day_trunc
        Sort Method: quicksort  Memory: 25kB
        ->  Custom Scan (Citus Real-Time)  (cost=0.00..0.00 rows=0 width=0) (actual time=760.577..760.578 rows=6 loops=1)
              Output: remote_scan.first_ev_day_trunc, remote_scan.count
              Task Count: 32
              Tasks Shown: One of 32
              ->  Task
                    Node: host=94.130.157.249 port=5432 dbname=klonemobile
                    ->  Group  (cost=89.13..89.25 rows=8 width=40) (actual time=0.339..0.343 rows=1 loops=1)
                          Output: (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))), table_two.user_id
                          Group Key: (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))), table_two.user_id
                          Buffers: shared hit=9
                          ->  Sort  (cost=89.13..89.15 rows=8 width=40) (actual time=0.337..0.338 rows=24 loops=1)
                                Output: (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))), table_two.user_id
                                Sort Key: (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))), table_two.user_id
                                Sort Method: quicksort  Memory: 26kB
                                Buffers: shared hit=9
                                ->  Hash Left Join  (cost=44.44..89.01 rows=8 width=40) (actual time=0.281..0.307 rows=24 loops=1)
                                      Output: timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time"))), table_two.user_id
                                      Hash Cond: ((table_one.user_id)::text = (table_two.user_id)::text)
                                      Join Filter: ((table_one."time" < table_two."time") AND ((table_one."time" + '2 days'::interval day to second) >= table_two."time"))
                                      Rows Removed by Join Filter: 1
                                      Buffers: shared hit=3
                                      ->  Append  (cost=0.00..44.34 rows=8 width=40) (actual time=0.024..0.027 rows=1 loops=1)
                                            Buffers: shared hit=1
                                            ->  Seq Scan on table_one_17955_2004312 table_one  (cost=0.00..22.15 rows=4 width=40) (actual time=0.024..0.024 rows=1 loops=1)
                                                  Output: table_one."time", table_one.user_id
                                                  Filter: ((table_one."time" >= '2019-02-28 11:00:00+03'::timestamp with time zone) AND (table_one."time" < '2019-03-01 11:00:00+03'::timestamp with time zone))
                                                  Buffers: shared hit=1
                                            ->  Seq Scan on table_one_17956_2005560 table_one_1  (cost=0.00..22.15 rows=4 width=40) (actual time=0.002..0.002 rows=0 loops=1)
                                                  Output: table_one_1."time", table_one_1.user_id
                                                  Filter: ((table_one_1."time" >= '2019-02-28 11:00:00+03'::timestamp with time zone) AND (table_one_1."time" < '2019-03-01 11:00:00+03'::timestamp with time zone))
                                      ->  Hash  (cost=44.34..44.34 rows=8 width=40) (actual time=0.044..0.044 rows=25 loops=1)
                                            Output: table_two.user_id, table_two."time"
                                            Buckets: 1024  Batches: 1  Memory Usage: 10kB
                                            Buffers: shared hit=2
                                            ->  Append  (cost=0.00..44.34 rows=8 width=40) (actual time=0.018..0.030 rows=25 loops=1)
                                                  Buffers: shared hit=2
                                                  ->  Seq Scan on table_two_17955_2003480 table_two  (cost=0.00..22.15 rows=4 width=40) (actual time=0.018..0.023 rows=24 loops=1)
                                                        Output: table_two.user_id, table_two."time"
                                                        Filter: ((table_two."time" >= '2019-02-28 11:00:00+03'::timestamp with time zone) AND (table_two."time" < '2019-03-02 11:00:00+03'::timestamp with time zone))
                                                        Buffers: shared hit=1
                                                  ->  Seq Scan on table_two_17956_2005304 table_two_1  (cost=0.00..22.15 rows=4 width=40) (actual time=0.004..0.004 rows=1 loops=1)
                                                        Output: table_two_1.user_id, table_two_1."time"
                                                        Filter: ((table_two_1."time" >= '2019-02-28 11:00:00+03'::timestamp with time zone) AND (table_two_1."time" < '2019-03-02 11:00:00+03'::timestamp with time zone))
                                                        Buffers: shared hit=1
                        Planning Time: 41.035 ms
                        Execution Time: 0.448 ms
Planning Time: 1.846 ms
Execution Time: 760.663 ms

Plan when grouping by table_one_day and period

GroupAggregate  (cost=0.00..0.00 rows=0 width=0) (actual time=46028.822..46028.825 rows=3 loops=1)
  Output: remote_scan.first_ev_day_trunc, remote_scan.period, count(DISTINCT remote_scan.count)
  Group Key: remote_scan.first_ev_day_trunc, remote_scan.period
  Buffers: shared hit=3
  ->  Sort  (cost=0.00..0.00 rows=0 width=0) (actual time=46028.804..46028.804 rows=7 loops=1)
        Output: remote_scan.first_ev_day_trunc, remote_scan.period, remote_scan.count
        Sort Key: remote_scan.first_ev_day_trunc, remote_scan.period
        Sort Method: quicksort  Memory: 25kB
        Buffers: shared hit=3
        ->  Custom Scan (Citus Real-Time)  (cost=0.00..0.00 rows=0 width=0) (actual time=46028.786..46028.788 rows=7 loops=1)
              Output: remote_scan.first_ev_day_trunc, remote_scan.period, remote_scan.count
              Task Count: 32
              Tasks Shown: One of 32
              ->  Task
                    Node: host=94.130.157.249 port=5432 dbname=klonemobile
                    ->  Group  (cost=89.29..89.59 rows=8 width=48) (actual time=0.379..0.384 rows=2 loops=1)
                          Output: (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))), (date_part('day'::text, (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_two."time"))) - timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))))), table_two.user_id
                          Group Key: (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))), (date_part('day'::text, (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_two."time"))) - timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))))), table_two.user_id
                          Buffers: shared hit=12
                          ->  Sort  (cost=89.29..89.31 rows=8 width=48) (actual time=0.378..0.379 rows=24 loops=1)
                                Output: (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))), (date_part('day'::text, (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_two."time"))) - timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))))), table_two.user_id
                                Sort Key: (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))), (date_part('day'::text, (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_two."time"))) - timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time")))))), table_two.user_id
                                Sort Method: quicksort  Memory: 26kB
                                Buffers: shared hit=12
                                ->  Hash Left Join  (cost=44.44..89.17 rows=8 width=48) (actual time=0.284..0.337 rows=24 loops=1)
                                      Output: timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time"))), date_part('day'::text, (timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_two."time"))) - timezone('America/Los_Angeles'::text, date_trunc('DAY'::text, timezone('America/Los_Angeles'::text, table_one."time"))))), table_two.user_id
                                      Hash Cond: ((table_one.user_id)::text = (table_two.user_id)::text)
                                      Join Filter: ((table_one."time" < table_two."time") AND ((table_one."time" + '2 days'::interval day to second) >= table_two."time"))
                                      Rows Removed by Join Filter: 1
                                      Buffers: shared hit=3
                                      ->  Append  (cost=0.00..44.34 rows=8 width=40) (actual time=0.026..0.029 rows=1 loops=1)
                                            Buffers: shared hit=1
                                            ->  Seq Scan on table_one_17955_2004312 table_one  (cost=0.00..22.15 rows=4 width=40) (actual time=0.025..0.026 rows=1 loops=1)
                                                  Output: table_one."time", table_one.user_id
                                                  Filter: ((table_one."time" >= '2019-02-28 11:00:00+03'::timestamp with time zone) AND (table_one."time" < '2019-03-01 11:00:00+03'::timestamp with time zone))
                                                  Buffers: shared hit=1
                                            ->  Seq Scan on table_one_17956_2005560 table_one_1  (cost=0.00..22.15 rows=4 width=40) (actual time=0.002..0.002 rows=0 loops=1)
                                                  Output: table_one_1."time", table_one_1.user_id
                                                  Filter: ((table_one_1."time" >= '2019-02-28 11:00:00+03'::timestamp with time zone) AND (table_one_1."time" < '2019-03-01 11:00:00+03'::timestamp with time zone))
                                      ->  Hash  (cost=44.34..44.34 rows=8 width=40) (actual time=0.026..0.026 rows=25 loops=1)
                                            Output: table_two."time", table_two.user_id
                                            Buckets: 1024  Batches: 1  Memory Usage: 10kB
                                            Buffers: shared hit=2
                                            ->  Append  (cost=0.00..44.34 rows=8 width=40) (actual time=0.011..0.019 rows=25 loops=1)
                                                  Buffers: shared hit=2
                                                  ->  Seq Scan on table_two_17955_2003480 table_two  (cost=0.00..22.15 rows=4 width=40) (actual time=0.011..0.014 rows=24 loops=1)
                                                        Output: table_two."time", table_two.user_id
                                                        Filter: ((table_two."time" >= '2019-02-28 11:00:00+03'::timestamp with time zone) AND (table_two."time" < '2019-03-02 11:00:00+03'::timestamp with time zone))
                                                        Buffers: shared hit=1
                                                  ->  Seq Scan on table_two_17956_2005304 table_two_1  (cost=0.00..22.15 rows=4 width=40) (actual time=0.003..0.003 rows=1 loops=1)
                                                        Output: table_two_1."time", table_two_1.user_id
                                                        Filter: ((table_two_1."time" >= '2019-02-28 11:00:00+03'::timestamp with time zone) AND (table_two_1."time" < '2019-03-02 11:00:00+03'::timestamp with time zone))
                                                        Buffers: shared hit=1
                        Planning Time: 5899.378 ms
                        Execution Time: 0.531 ms
Planning Time: 2.757 ms
Execution Time: 46028.896 ms


1 Answer


How many columns does table_one actually have? That is, does it really have only two columns? If it is a wide table, you can create an index on (user_id, ttz) for that table. This lets the database scan a small data structure (the index) instead of the large one (the table).
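The suggestion above as DDL, using the table definitions from the question (the index names are illustrative):

```sql
CREATE INDEX idx_table_one_user_ttz ON table_one (user_id, ttz);
CREATE INDEX idx_table_two_user_ttz ON table_two (user_id, ttz);
```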

If it is still slow: some databases such as Oracle allow expressions when creating an index. MySQL offers similar functionality via non-stored virtual columns (effectively functional indexes in InnoDB).
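Postgres itself also supports expression indexes directly. The grouping expression here qualifies, because `ttz AT TIME ZONE 'America/Los_Angeles'` (a named-zone conversion of a `timestamptz`) and `date_trunc` over the resulting plain timestamp are both immutable. A sketch with an illustrative index name:

```sql
CREATE INDEX idx_table_one_la_day
    ON table_one (date_trunc('DAY', ttz AT TIME ZONE 'America/Los_Angeles'));
```

Note this helps only if the planner can actually use the index for the grouped query; in this case the EXPLAIN output suggests the time is spent in distributed planning rather than execution, so it may not address the slowdown.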

Answered 2019-04-01T17:15:19.820