Most efficient way to join two time series

Votes: 4 · Answers: 1

Imagine I have a table like this:

CREATE TABLE time_series (
    snapshot_date DATE,
    sales INTEGER,
    PRIMARY KEY (snapshot_date)
);

with values like these:

INSERT INTO time_series SELECT '2017-01-01'::DATE AS snapshot_date, 10 AS sales;
INSERT INTO time_series SELECT '2017-01-02'::DATE AS snapshot_date, 4 AS sales;
INSERT INTO time_series SELECT '2017-01-03'::DATE AS snapshot_date, 13 AS sales;
INSERT INTO time_series SELECT '2017-01-04'::DATE AS snapshot_date, 7 AS sales;
INSERT INTO time_series SELECT '2017-01-05'::DATE AS snapshot_date, 15 AS sales;
INSERT INTO time_series SELECT '2017-01-06'::DATE AS snapshot_date, 8 AS sales;

I would like to be able to do this:

SELECT a.snapshot_date, 
       AVG(b.sales) AS sales_avg,
       COUNT(*) AS COUNT
  FROM time_series AS a
  JOIN time_series AS b
       ON a.snapshot_date > b.snapshot_date
 GROUP BY a.snapshot_date

which produces results like this:

*---------------*-----------*-------*
| snapshot_date | sales_avg | count |
*---------------*-----------*-------*
|  2017-01-02   |   10.0    |    1  |
|  2017-01-03   |   7.0     |    2  |
|  2017-01-04   |   9.0     |    3  |
|  2017-01-05   |   8.5     |    4  |
|  2017-01-06   |   9.8     |    5  |
*---------------*-----------*-------*

With so few rows in this example, the query runs very quickly. The problem is that I have to do this over millions of rows, and on Redshift (Postgres-like syntax) my query takes days to run. It is extremely slow, yet this is one of my most common query patterns. I suspect the problem is O(n^2) growth in the work as the data grows, versus the preferable O(n).
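To make the blow-up concrete, a quick sanity check (just counting the intermediate rows the inequality self-join materializes before the GROUP BY, against the table above) shows the quadratic growth:

-- Count the rows the inequality self-join produces before grouping.
-- For n input rows this is n * (n - 1) / 2, i.e. O(n^2):
-- the 6 rows above yield 15, but 3 million rows would yield ~4.5 trillion.
SELECT COUNT(*) AS join_rows
  FROM time_series AS a
  JOIN time_series AS b
       ON a.snapshot_date > b.snapshot_date;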

My O(n) implementation in Python would look like this:

# (date, sales) pairs, ordered by date
rows = [('2017-01-01', 10),
        ('2017-01-02', 4),
        ('2017-01-03', 13),
        ('2017-01-04', 7),
        ('2017-01-05', 15),
        ('2017-01-06', 8)]
sales_total_previous = 0  # running sum of all earlier rows
count = 0                 # number of earlier rows
for index, row in enumerate(rows):
    snapshot_date = row[0]
    sales = row[1]
    if index == 0:
        # the first date has no preceding rows, so emit nothing
        sales_total_previous += sales
        continue
    count += 1
    sales_avg = sales_total_previous / count
    print((snapshot_date, sales_avg, count))
    sales_total_previous += sales

with these results (identical to the SQL query's):

('2017-01-02', 10.0, 1)
('2017-01-03', 7.0, 2)
('2017-01-04', 9.0, 3)
('2017-01-05', 8.5, 4)
('2017-01-06', 9.8, 5)

I was considering switching to Apache Spark so I could do exactly that Python-style query, but a few million rows really isn't that big (3-4 GB at most), and using a Spark cluster with 100 GB of RAM seems like overkill. Is there an efficient and readable way to get O(n) efficiency in SQL, preferably in Postgres/Redshift?

python sql postgresql amazon-redshift
1 Answer

5 votes

You seem to want:

SELECT ts.snapshot_date, 
       AVG(ts.sales) OVER (ORDER BY ts.snapshot_date) AS sales_avg,
       ROW_NUMBER() OVER (ORDER BY ts.snapshot_date) AS COUNT
FROM time_series ts;

You should find that window functions are much more efficient.
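One caveat worth noting (my comparison of the two queries, not part of the original answer): the running AVG above includes the current row, whereas the question's self-join averages only strictly earlier dates. A sketch that reproduces the question's expected output exactly, using an explicit window frame, would be something like:

SELECT snapshot_date, sales_avg, count
  FROM (SELECT snapshot_date,
               -- average over strictly earlier rows only
               AVG(sales) OVER (ORDER BY snapshot_date
                                ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS sales_avg,
               ROW_NUMBER() OVER (ORDER BY snapshot_date) - 1 AS count
          FROM time_series) t
 WHERE count > 0;  -- drop the first date, which has no preceding rows

Either version is computed in a single pass over data sorted by snapshot_date, so it scales far better than the quadratic self-join.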
