How do I get all combinations of an array column in Spark?

Problem description (votes: 0, answers: 4)

Suppose I have an array column group_ids:

+-------+----------+
|user_id|group_ids |
+-------+----------+
|1      |[5, 8]    |
|3      |[1, 2, 3] |
|2      |[1, 4]    |
+-------+----------+

Schema:

root
 |-- user_id: integer (nullable = false)
 |-- group_ids: array (nullable = false)
 |    |-- element: integer (containsNull = false)
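
For reference, the example data can be built like this (a minimal sketch, assuming a SparkSession named spark, e.g. in spark-shell):

import spark.implicits._

val df = Seq(
  (1, Seq(5, 8)),
  (3, Seq(1, 2, 3)),
  (2, Seq(1, 4))
).toDF("user_id", "group_ids")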

I want to get all pair combinations:

+-------+------------------------+
|user_id|group_ids               |
+-------+------------------------+
|1      |[[5, 8]]                |
|3      |[[1, 2], [1, 3], [2, 3]]|
|2      |[[1, 4]]                |
+-------+------------------------+

So far, the simplest solution I came up with uses a UDF:

import org.apache.spark.sql.functions.{expr, udf}

spark.udf.register("permutate", udf((xs: Seq[Int]) => xs.combinations(2).toSeq))

dataset.withColumn("group_ids", expr("permutate(group_ids)"))

What I'm looking for is something built from Spark's built-in functions. Is there a way to achieve the same without a UDF?

scala apache-spark apache-spark-sql user-defined-functions
4 Answers

4 votes

This can be done with a couple of higher-order functions. Requires Spark >= 2.4.

import org.apache.spark.sql.functions.expr

val df2 = df.withColumn(
    "group_ids",
    expr("""
        filter(
            transform(
                flatten(
                    transform(
                        group_ids, 
                        x -> arrays_zip(
                            array_repeat(x, size(group_ids)), 
                            group_ids
                        )
                    )
                ), 
                x -> array(x['0'], x['group_ids'])
            ), 
            x -> x[0] < x[1]
        )
    """)
)


df2.show(false)
+-------+------------------------+
|user_id|group_ids               |
+-------+------------------------+
|1      |[[5, 8]]                |
|3      |[[1, 2], [1, 3], [2, 3]]|
|2      |[[1, 4]]                |
+-------+------------------------+
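
The inner transform + arrays_zip step pairs every element x with the whole array, producing the full cross product of pairs, and the outer filter with x[0] < x[1] keeps each unordered pair exactly once while dropping the (x, x) diagonal (this relies on the arrays being sorted and duplicate-free, as in the example). The same idea can be written a bit more compactly with a nested transform; a sketch, with the same Spark >= 2.4 requirement:

import org.apache.spark.sql.functions.expr

val df3 = df.withColumn(
    "group_ids",
    expr("""
        filter(
            flatten(
                transform(
                    group_ids,
                    x -> transform(group_ids, y -> array(x, y))
                )
            ),
            p -> p[0] < p[1]
        )
    """)
)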

1 vote

You can get the maximum size of the group_ids column. Then, use combinations over the range (1 to maxSize) together with when expressions to build the sub-array pairs from the original array, and finally filter out the null elements from the resulting array:

import org.apache.spark.sql.functions.{array, element_at, expr, max, size, when}
import spark.implicits._  // for the $"column" syntax

val maxSize = df.select(max(size($"group_ids"))).first.getAs[Int](0)

val newCol = (1 to maxSize).combinations(2)
  .map(c =>
    when(
      size($"group_ids") >= c(1),
      array(element_at($"group_ids", c(0)), element_at($"group_ids", c(1)))
    )
  ).toSeq

df.withColumn("group_ids", array(newCol: _*))
  .withColumn("group_ids", expr("filter(group_ids, x -> x is not null)"))
  .show(false)

//+-------+------------------------+
//|user_id|group_ids               |
//+-------+------------------------+
//|1      |[[5, 8]]                |
//|3      |[[1, 2], [1, 3], [2, 3]]|
//|2      |[[1, 4]]                |
//+-------+------------------------+
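
To see what the driver-side part generates: for maxSize = 3, the index pairs are

(1 to 3).combinations(2).toList
// List(Vector(1, 2), Vector(1, 3), Vector(2, 3))

so newCol holds one when expression per index pair. For rows whose array is shorter than the second index the expression evaluates to null, and the final filter removes those nulls.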

0 votes

A solution based on explode and joins:

import org.apache.spark.sql.functions.{col, collect_set, explode, struct}
import spark.implicits._  // for the $"column" syntax

val exploded = df.select(col("user_id"), explode(col("group_ids")).as("e"))

// to have combinations
val joined1 = exploded.as("t1")
                      .join(exploded.as("t2"), Seq("user_id"), "outer")
                      .select(col("user_id"), col("t1.e").as("e1"), col("t2.e").as("e2"))

// to filter out redundant combinations
val joined2 = joined1.as("t1")
                     .join(joined1.as("t2"), $"t1.user_id" === $"t2.user_id" && $"t1.e1" === $"t2.e2" && $"t1.e2"=== $"t2.e1")
                     .where("t1.e1 < t2.e1")
                     .select("t1.*")

// group into array
val result = joined2.groupBy("user_id")
                    .agg(collect_set(struct("e1", "e2")).as("group_ids"))
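
The second self-join can actually be avoided: joined1 already contains every ordered pair per user, so keeping only the rows with e1 < e2 removes the duplicates and the (x, x) diagonal in one step. A minimal sketch of that shorter variant (names here are illustrative):

import org.apache.spark.sql.functions.{col, collect_set, explode, struct}

val exploded2 = df.select(col("user_id"), explode(col("group_ids")).as("e"))

// every ordered pair per user, filtered down to e1 < e2 in one pass
val pairs = exploded2.as("t1")
  .join(exploded2.as("t2"), Seq("user_id"))
  .select(col("user_id"), col("t1.e").as("e1"), col("t2.e").as("e2"))
  .where(col("e1") < col("e2"))

val result2 = pairs.groupBy("user_id")
  .agg(collect_set(struct("e1", "e2")).as("group_ids"))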

0 votes

I find this quite clear and efficient. It's in PySpark, but it shouldn't be hard to translate:

df.select("user_id", F.array_sort("group_ids").alias("group_ids")).select(
    # In case of only 2 numbers, try to optimize and skip the transforms
    F.when(F.size("group_ids") == 2, F.array(F.col("group_ids")))
    .otherwise(
        F.flatten(
            F.transform(
                "group_ids",
                lambda id, index: F.transform(
                    F.slice("group_ids", index + 2, F.size("group_ids") - 1),
                    lambda id2: F.array(id, id2),
                ),
            )
        )
    )
    .alias("pairs")
)
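
For completeness, a possible Scala translation of the same idea (a sketch, assuming Spark 3.1+ so that the transform and slice overloads used below exist in the Scala Column API; user_id is kept in the second select here):

import org.apache.spark.sql.functions._

val pairs = df
  .select(col("user_id"), array_sort(col("group_ids")).as("group_ids"))
  .select(
    col("user_id"),
    // arrays of exactly two elements already form the single pair
    when(size(col("group_ids")) === 2, array(col("group_ids")))
      .otherwise(
        flatten(
          // pair each element with every element that comes after it
          transform(col("group_ids"), (id, index) =>
            transform(
              slice(col("group_ids"), index + 2, size(col("group_ids")) - 1),
              id2 => array(id, id2)
            )
          )
        )
      )
      .as("pairs")
  )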