Spark: create JSON groups by id


I have a DataFrame unionDataDF with the following sample data:

+---+------------------+----+
| id|              data| key|
+---+------------------+----+
|  1|[{"data":"data1"}]|key1|
|  2|[{"data":"data2"}]|key1|
|  1|[{"data":"data1"}]|key2|
|  2|[{"data":"data2"}]|key2|
+---+------------------+----+

where id is an IntegerType, data is an array of JSON strings, and key is a StringType.

I want to send the data over the network per id. For example, the output for id "1" should look like this:

{
    "id": 1,
    "data": {
        "key1": [{
            "data": "data1"
        }],
        "key2": [{
            "data": "data1"
        }]
    }
}

How can I do this?

Sample code used to create unionDataDF:

val dummyDataDF = Seq((1, "data1"), (2, "data2")).toDF("id", "data")
val key1JsonDataDF = dummyDataDF.withColumn("data", to_json(struct($"data"))).groupBy("id").agg(collect_list($"data").alias("data")).withColumn("key", lit("key1"))
val key2JsonDataDF = dummyDataDF.withColumn("data", to_json(struct($"data"))).groupBy("id").agg(collect_list($"data").alias("data")).withColumn("key", lit("key2"))
val unionDataDF = key1JsonDataDF.union(key2JsonDataDF)

Versions:

Spark: 2.2
Scala: 2.11
scala apache-spark apache-spark-sql rdd
1 Answer

Something like:

unionDataDF
  .groupBy("id")
  .agg(collect_list(struct("key", "data")).alias("grouped"))
  .show(10, false)

Output:

+---+--------------------------------------------------------+
|id |grouped                                                 |
+---+--------------------------------------------------------+
|1  |[[key1, [{"data":"data1"}]], [key2, [{"data":"data1"}]]]|
|2  |[[key1, [{"data":"data2"}]], [key2, [{"data":"data2"}]]]|
+---+--------------------------------------------------------+
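The grouped column above is still an array of (key, data) structs, not the nested JSON object asked for. Spark 2.2 has no map_from_entries (that arrived in 2.4), so one way to finish is to assemble the JSON string yourself, e.g. in a map over the grouped Dataset. This is a sketch; toIdJson is a hypothetical helper, and it relies on the data column already holding valid JSON strings (as produced by to_json above):

```scala
// Hypothetical helper: build the per-id JSON string from the grouped rows.
// `entries` mirrors the `grouped` column: one (key, data) pair per original row,
// where `data` is the array of JSON strings produced by to_json earlier.
def toIdJson(id: Int, entries: Seq[(String, Seq[String])]): String = {
  val dataJson = entries
    .map { case (key, docs) => s""""$key": ${docs.mkString("[", ",", "]")}""" }
    .mkString("{", ",", "}")
  s"""{"id": $id, "data": $dataJson}"""
}

// Assumed usage against the grouped DataFrame (Spark 2.2, spark.implicits._ in scope):
// unionDataDF
//   .groupBy("id")
//   .agg(collect_list(struct($"key", $"data")).alias("grouped"))
//   .as[(Int, Seq[(String, Seq[String])])]
//   .map { case (id, entries) => toIdJson(id, entries) }
```

Because the inner data values are already serialized JSON, simple string concatenation is enough here; for anything more complex (or untrusted input), a JSON library would be the safer choice.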
