Converting a Kafka topic to a Spark DataFrame and writing it to HDFS

Problem description

I am trying to create a Kafka consumer in my Spark code, and I get an exception when creating it. My goal is to read from a topic and write the contents to an HDFS path.

scala> df2.printSchema()
root
 |-- key: binary (nullable = true)
 |-- value: binary (nullable = true)
 |-- topic: string (nullable = true)
 |-- partition: integer (nullable = true)
 |-- offset: long (nullable = true)
 |-- timestamp: timestamp (nullable = true)
 |-- timestampType: integer (nullable = true)

scala> print(df1)
[key: binary, value: binary ... 5 more fields]

I am not producing any input to the topic, yet it still comes back with these columns.
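For reference, those columns are not data from the topic: they are the fixed schema that Spark's Kafka source attaches to every record, and the actual message payload sits in the binary value column. A minimal sketch for extracting it from the df2 shown above, assuming the messages were produced as plain text:

// key and value arrive as binary; cast them to strings to read the payload
val messages = df2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
messages.show(false)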

import org.apache.spark.sql.SparkSession

object Read {
  def main(args: Array[String]): Unit = {

    val spark = SparkSession.builder()
      .appName("spark Oracle Kafka")
      .master("local")
      .getOrCreate()

    // spark.implicits._ can only be imported once the session exists
    import spark.implicits._

    val df2 = spark
      .read
      .format("kafka")
      .option("kafka.bootstrap.servers", "<kafka server ip address>")
      .option("subscribe", "topic20190904")
      .load()

    print(df2)                             // returns the schema summary shown above
    df2.show()                             // throws an exception here
    df2.write.parquet("/user/xrrn5/abcd")  // fails with java.lang.AbstractMethodError
  }
}

The stack trace begins with:

java.lang.AbstractMethodError
  at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala)
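As an aside, java.lang.AbstractMethodError in Spark almost always signals a binary incompatibility: the spark-sql-kafka connector on the classpath was built against a different Spark or Scala version than the one actually running. A build.sbt sketch, with the versions as assumptions to be matched to your cluster:

// build.sbt -- the connector version must match the Spark version exactly,
// and scalaVersion must match the Scala build of the cluster's Spark
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"            % "2.4.4" % "provided",
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.4"
)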
1 Answer

To write data from Kafka to HDFS you don't actually need any code - you can just use Kafka Connect, which is part of Apache Kafka. Here is a sample configuration:

{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test_hdfs",
    "hdfs.url": "hdfs://localhost:9000",
    "flush.size": "3",
    "name": "hdfs-sink"
  }
}
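Assuming a Kafka Connect worker is already running, this JSON can be submitted to its REST interface (port 8083 by default) with a POST to /connectors, after which the connector continuously drains the test_hdfs topic into files under hdfs.url.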

See the HDFS connector documentation for further details.
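If you do want to stay in Spark, the streaming equivalent of the question's code would be a structured-streaming job. A minimal sketch, reusing the topic and output path from the question (the broker address is a placeholder and the checkpoint location is a hypothetical addition):

// Read the topic as a stream and continuously write parquet files to HDFS
val stream = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "<kafka server ip address>")
  .option("subscribe", "topic20190904")
  .load()

stream
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("parquet")
  .option("path", "/user/xrrn5/abcd")
  .option("checkpointLocation", "/user/xrrn5/abcd_checkpoint")
  .start()
  .awaitTermination()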
