RDD problem related to SparkSession

Problem description (votes: -2, answers: 1)

I am new to Spark and Scala and am currently practicing on my own. Could anyone help with this problem?

Cannot resolve symbol SparkSession in Scala

This happens when I write import org.apache.spark.sql.SparkSession in Scala while practicing RDDs and transformations.

scala apache-spark bigdata
1 Answer
2 votes

It looks like you are missing the dependency. If you use Maven, you can add the following to your pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.project.lib</groupId>
    <artifactId>PROJECT</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>
    </dependencies>
</project>

If you use sbt instead, put the following in build.sbt (note: the file is build.sbt, not sbt.build; the %% operator automatically appends the Scala version suffix, e.g. _2.11, to the artifact name):

name := "SparkTest"

version := "0.1"

scalaVersion := "2.11.8"

val sparkVersion = "2.3.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-hive" % sparkVersion
)
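
With either build in place, the import should resolve. As a sanity check, here is a minimal sketch that creates a SparkSession and runs an RDD transformation; the object name RddPractice and the sample data are illustrative, not part of the original question:

import org.apache.spark.sql.SparkSession

object RddPractice {
  def main(args: Array[String]): Unit = {
    // Build a local SparkSession; "local[*]" uses all available cores.
    val spark = SparkSession.builder()
      .appName("RddPractice")
      .master("local[*]")
      .getOrCreate()

    // Create an RDD from a local collection and apply a transformation.
    val rdd = spark.sparkContext.parallelize(1 to 10)
    val squared = rdd.map(n => n * n)

    // collect() brings the results back to the driver for printing.
    println(squared.collect().mkString(", "))

    spark.stop()
  }
}

If this compiles and runs, the dependency problem is solved; if the IDE still cannot resolve the symbol, re-import the Maven/sbt project so the new dependencies are picked up.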