Zip function with 3 arguments

Problem description

I want to transpose multiple columns in a Spark SQL table. I found this solution, which works for only two columns, and I would like to know how to use the zip column function with three columns: varA, varB and varC.

import org.apache.spark.sql.functions.{udf, explode}

val zip = udf((xs: Seq[Long], ys: Seq[Long]) => xs.zip(ys))

df.withColumn("vars", explode(zip($"varA", $"varB"))).select(
   $"userId", $"someString",
   $"vars._1".alias("varA"), $"vars._2".alias("varB")).show

This is my dataframe schema:

root
 |-- owningcustomerid: string (nullable = true)
 |-- event_stoptime: string (nullable = true)
 |-- balancename: string (nullable = false)
 |-- chargedvalue: string (nullable = false)
 |-- newbalance: string (nullable = false)

I tried this code:

val zip = udf((xs: Seq[String], ys: Seq[String], zs: Seq[String]) => (xs, ys, zs).zipped.toSeq)

df.printSchema

val df4 = df.withColumn("vars", explode(zip($"balancename", $"chargedvalue", $"newbalance"))).select(
   $"owningcustomerid", $"event_stoptime",
   $"vars._1".alias("balancename"), $"vars._2".alias("chargedvalue"), $"vars._3".alias("newbalance"))

I got this error:

cannot resolve 'UDF(balancename, chargedvalue, newbalance)' due to data type mismatch: argument 1 requires array<string> type, however, '`balancename`' is of string type. argument 2 requires array<string> type, however, '`chargedvalue`' is of string type. argument 3 requires array<string> type, however, '`newbalance`' is of string type.;;

'Project [owningcustomerid#1085, event_stoptime#1086, balancename#1159, chargedvalue#1160, newbalance#1161, explode(UDF(balancename#1159, chargedvalue#1160, newbalance#1161)) AS vars#1167]

scala apache-spark hadoop apache-spark-sql bigdata
1 Answer

In Scala, you can use Tuple3.zipped:

val zip = udf((xs: Seq[Long], ys: Seq[Long], zs: Seq[Long]) => 
  (xs, ys, zs).zipped.toSeq)

zip($"varA", $"varB", $"varC")
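Applied the same way as the two-column example in the question, a minimal sketch (assuming varA, varB and varC are array columns and that userId and someString exist as in that example):

import org.apache.spark.sql.functions.{udf, explode}

// zip is the three-argument UDF defined above
df.withColumn("vars", explode(zip($"varA", $"varB", $"varC"))).select(
  $"userId", $"someString",
  $"vars._1".alias("varA"), $"vars._2".alias("varB"), $"vars._3".alias("varC")).show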

Specifically, in Spark SQL (>= 2.4) you can use the arrays_zip function:

import org.apache.spark.sql.functions.arrays_zip

arrays_zip($"varA", $"varB", $"varC")
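A usage sketch with explode; selecting vars.* expands the zipped struct without depending on its field names (again assuming varA, varB and varC are array columns):

import org.apache.spark.sql.functions.{arrays_zip, explode}

df.withColumn("vars", explode(arrays_zip($"varA", $"varB", $"varC")))
  .select($"userId", $"someString", $"vars.*")
  .show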

But note that your data does not contain array<string> columns, only plain strings, so Spark arrays_zip and explode cannot be applied directly; you should parse your data first.
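For example, if each of those string columns held a comma-separated list (an assumption; the question shows no sample values), split could produce the arrays first:

import org.apache.spark.sql.functions.{split, explode, arrays_zip}

// Hypothetical parsing step: turn delimited strings into array<string> columns
val parsed = df
  .withColumn("balancename", split($"balancename", ","))
  .withColumn("chargedvalue", split($"chargedvalue", ","))
  .withColumn("newbalance", split($"newbalance", ","))

parsed
  .withColumn("vars", explode(arrays_zip($"balancename", $"chargedvalue", $"newbalance")))
  .select($"owningcustomerid", $"event_stoptime", $"vars.*")
  .show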
