PySpark RDD - Getting Rank, Then Converting to JSON

Problem description

I have a Hive query that returns the following data:

Date,Name,Score1,Score2,Avg_Score
1/1/2018,A,10,20,15
1/1/2018,B,20,20,20
1/1/2018,C,15,10,12.5
1/1/2018,D,11,12,11.5
1/1/2018,E,21,29,25
1/1/2018,F,10,21,15.5

I convert the result to an RDD using hive_context.sql(my_query).rdd. My end goal is to turn this into JSON, ranked in descending order of Avg_Score, like this:

Scores=
[
    {
        "Date": '1/1/2018',
        "Name": 'A',
        "Avg_Score": 15,
        "Rank":4
    },
    {
        "Date": '1/1/2018',
        "Name": 'B',
        "Avg_Score": 20,
        "Rank":2
    }
]

As a first step toward getting the rank, I tried to implement this approach, but I keep running into errors like AttributeError: 'RDD' object has no attribute 'withColumn'.

How can I accomplish this?

json apache-spark pyspark apache-spark-sql
1 Answer

That's because you are working at the RDD level. If you want to use the DataFrame API, you have to stay with Datasets (or DataFrames). As you mentioned in the comments, you can remove the .rdd conversion and use asDict to get the final result.

from pyspark.sql import Window
import pyspark.sql.functions as psf

# Sample data standing in for the result of hive_context.sql(my_query)
df = sc.parallelize([
  ("1/1/2018","A",10,20,15.0),
  ("1/1/2018","B",20,20,20.0),
  ("1/1/2018","C",15,10,12.5),
  ("1/1/2018","D",11,12,11.5),
  ("1/1/2018","E",21,29,25.0),
  ("1/1/2018","F",10,21,15.5)]).toDF(["Date","Name","Score1","Score2","Avg_Score"])

# Window ordered by Avg_Score descending, with no partition key
w = Window.orderBy(psf.desc("Avg_Score"))

rddDict = (df
  .withColumn("rank", psf.dense_rank().over(w))  # rank every row by Avg_Score
  .drop("Score1", "Score2")                      # keep only the fields needed for the JSON
  .rdd                                           # only now drop down to the RDD level
  .map(lambda row: row.asDict()))                # turn each Row into a plain dict

Result:

>>> rddDict.take(1)
[{'Date': u'1/1/2018', 'Avg_Score': 25.0, 'Name': u'E', 'rank': 1}]

But note the warning you get when using a Window function without a partition:

18/08/13 11:44:32 WARN window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.