How to filter Spark dataframe rows based on a column value that is a map

Question (votes: 1, answers: 1)

I have a dataframe like this:

+-------+------------------------+
|key    |                    data|
+-------+------------------------+
|     61|[a -> b, c -> d, e -> f]|
|     71|[a -> 1, c -> d, e -> f]|
|     81|        [c -> d, e -> f]|
|     91|[x -> b, y -> d, e -> f]|
|     11|[a -> a, c -> b, e -> f]|
|     21|[a -> a, c -> x, e -> f]|
+-------+------------------------+

I want to filter the rows whose data-column map contains the key 'a' and where the value for key 'a' is 'a'. So the following dataframe is the desired output:

+-------+------------------------+
|key    |                    data|
+-------+------------------------+
|     11|[a -> a, c -> b, e -> f]|
|     21|[a -> a, c -> x, e -> f]|
+-------+------------------------+
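For reference, a minimal sketch that reproduces this dataframe (the SparkSession setup, master, and app name are placeholders, not taken from the question):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("map-filter").getOrCreate()
    import spark.implicits._

    // key is an Int, data is a Map[String, String], matching the show() output above.
    val df = Seq(
      (61, Map("a" -> "b", "c" -> "d", "e" -> "f")),
      (71, Map("a" -> "1", "c" -> "d", "e" -> "f")),
      (81, Map("c" -> "d", "e" -> "f")),
      (91, Map("x" -> "b", "y" -> "d", "e" -> "f")),
      (11, Map("a" -> "a", "c" -> "b", "e" -> "f")),
      (21, Map("a" -> "a", "c" -> "x", "e" -> "f"))
    ).toDF("key", "data")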

I tried casting the column to a map, but I get this error:

== SQL ==
Map
^^^

  at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitPrimitiveDataType$1.apply(AstBuilder.scala:1673)
  at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitPrimitiveDataType$1.apply(AstBuilder.scala:1651)
  at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:108)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.visitPrimitiveDataType(AstBuilder.scala:1651)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.visitPrimitiveDataType(AstBuilder.scala:49)
  at org.apache.spark.sql.catalyst.parser.SqlBaseParser$PrimitiveDataTypeContext.accept(SqlBaseParser.java:13779)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.typedVisit(AstBuilder.scala:55)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.org$apache$spark$sql$catalyst$parser$AstBuilder$$visitSparkDataType(AstBuilder.scala:1645)
  at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleDataType$1.apply(AstBuilder.scala:90)
  at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleDataType$1.apply(AstBuilder.scala:90)
  at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:108)
  at org.apache.spark.sql.catalyst.parser.AstBuilder.visitSingleDataType(AstBuilder.scala:89)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parseDataType$1.apply(ParseDriver.scala:40)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parseDataType$1.apply(ParseDriver.scala:39)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:98)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseDataType(ParseDriver.scala:39)
  at org.apache.spark.sql.Column.cast(Column.scala:1017)
  ... 49 elided
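For what it's worth, that parse error is what Spark raises when the cast target is the bare word "Map": Column.cast(String) parses its argument as a Spark SQL data type, and a map type needs its key and value types spelled out. A small sketch of the difference (note the cast is redundant if the column is already a map<string,string>):

    // Fails to parse: "Map" alone is not a complete Spark SQL data type.
    // df.select(col("data").cast("Map"))

    // Parses: key and value types are spelled out.
    df.select(col("data").cast("map<string,string>"))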

If I only wanted to filter on the column 'key', I could do df.filter(col("key") === 61). But the problem here is that the value is a Map.

Is there something like df.filter(col("data").toMap.contains("a") && col("data").toMap.get("a") === "a")?


scala dataframe apache-spark apache-spark-sql bigdata
1 Answer (0 votes)

You can filter like this: df.filter(col("data.x") === "a"), where x is the map key you want to match on. For your example, that is df.filter(col("data.a") === "a").
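A short sketch of the equivalent spellings, assuming the df built in the question's setup above. Dot notation on a map column resolves to a value lookup for that literal key; rows whose map lacks the key produce null and are therefore dropped by the equality predicate:

    import org.apache.spark.sql.functions._

    // Dot notation: looks up the value for literal key "a".
    df.filter(col("data.a") === "a").show(false)

    // Explicit equivalents:
    df.filter(col("data").getItem("a") === "a").show(false)
    df.filter(element_at(col("data"), "a") === "a").show(false) // Spark 2.4+

    // All three print the desired output:
    // +---+------------------------+
    // |key|data                    |
    // +---+------------------------+
    // |11 |[a -> a, c -> b, e -> f]|
    // |21 |[a -> a, c -> x, e -> f]|
    // +---+------------------------+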
