I'm a beginner, and I have data in the following format:
Category,Subcategory,Name
Food,Thai,Restaurant A
Food,Thai,Restaurant B
Food, Chinese, Restaurant C
Lodging, Hotel, Hotel A
I'd like the data in the following format:
{Category : Food , Subcategories : [ {subcategory : Thai , names : [Restaurant A , Restaurant B] }, {subcategory : Chinese , names : [Restaurant C]}]}
{Category : Lodging , Subcategories : [ {subcategory : Hotel , names : [Hotel A] }]}
Can someone help me solve this using PySpark RDDs?
Thanks!
Here's a solution that works:
Use a window function partitioned by Category and Subcategory to collect the names, then group by Category.
from pyspark.sql import functions as F
from pyspark.sql import Window

# Collect all names within each (Category, Subcategory) partition
groupByCateWind = Window.partitionBy("Category", "Subcategory")

finalDf = df.withColumn("names", F.collect_list("Name").over(groupByCateWind)) \
    .withColumn("Subcategories", F.struct("Subcategory", "names")) \
    .groupBy("Category") \
    .agg(F.collect_set("Subcategories").alias("Subcategories")) \
    .toJSON()
The output looks like this (the leading spaces in " Chinese" and " Hotel A" come from the unstripped whitespace after the commas in your input; trim those columns first if that matters):
+---------------------------------------------------------------------------------------------------------------------------------------------------------+
|{"Category":"Food","Subcategories":[{"Subcategory":"Thai","names":["Restaurant A","Restaurant B"]},{"Subcategory":" Chinese","names":[" Restaurant C"]}]}|
|{"Category":"Lodging","Subcategories":[{"Subcategory":" Hotel","names":[" Hotel A"]}]} |
+---------------------------------------------------------------------------------------------------------------------------------------------------------+
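If you want to experiment with the target structure without a Spark cluster, the same grouping can be sketched in plain Python with only the standard library. This is just an illustration of the desired nesting (not the PySpark answer above), and it also strips the stray whitespace from the input:

```python
import csv
import io
import json
from collections import defaultdict

# The sample data from the question, including the stray spaces
raw = """Category,Subcategory,Name
Food,Thai,Restaurant A
Food,Thai,Restaurant B
Food, Chinese, Restaurant C
Lodging, Hotel, Hotel A
"""

# Group names by (category, subcategory), stripping whitespace
grouped = defaultdict(lambda: defaultdict(list))
for row in csv.DictReader(io.StringIO(raw)):
    category = row["Category"].strip()
    subcategory = row["Subcategory"].strip()
    grouped[category][subcategory].append(row["Name"].strip())

# Build one nested record per category
records = [
    {
        "Category": category,
        "Subcategories": [
            {"Subcategory": sub, "names": names}
            for sub, names in subs.items()
        ],
    }
    for category, subs in grouped.items()
]

for record in records:
    print(json.dumps(record))
```

This prints one JSON document per category, matching the shape of the PySpark output but without the leading spaces.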