Error after upgrading Dataproc Serverless runtime version from 2.1 to 2.2


I changed the Dataproc Serverless runtime version from 2.1 to 2.2, and now the job fails with the following error:

Exception in thread "main" java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: com.google.cloud.spark.bigquery.BigQueryRelationProvider not a subtype
    at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:593)
    at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNextService(ServiceLoader.java:1244)
    at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNext(ServiceLoader.java:1273)
    at java.base/java.util.ServiceLoader$2.hasNext(ServiceLoader.java:1309)
    at java.base/java.util.ServiceLoader$3.hasNext(ServiceLoader.java:1393)
    at scala.collection.convert.JavaCollectionWrappers$JIteratorWrapper.hasNext(JavaCollectionWrappers.scala:46)
    at scala.collection.StrictOptimizedIterableOps.filterImpl(StrictOptimizedIterableOps.scala:225)
    at scala.collection.StrictOptimizedIterableOps.filterImpl$(StrictOptimizedIterableOps.scala:222)
    at scala.collection.convert.JavaCollectionWrappers$JIterableWrapper.filterImpl(JavaCollectionWrappers.scala:83)
    at scala.collection.StrictOptimizedIterableOps.filter(StrictOptimizedIterableOps.scala:218)
    at scala.collection.StrictOptimizedIterableOps.filter$(StrictOptimizedIterableOps.scala:218)
    at scala.collection.convert.JavaCollectionWrappers$JIterableWrapper.filter(JavaCollectionWrappers.scala:83)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:629)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:697)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:208)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:172)
    at com.dia.repositories.BigQueryAllocationRepository.process(BigQueryRepository.java:90)
    at com.dia.services.LoyaltyAllocationLoader.load(Test.java:18)
    at com.dia.LoyaltyAllocationMain.main(TestMain.java:42)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:569)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1032)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1124)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1133)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The code has not changed; these are the dependencies in use:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.13</artifactId>
        <version>3.4.0</version>
        <scope>provided</scope>
        <exclusions>
            <exclusion>
                <groupId>com.google.protobuf</groupId>
                <artifactId>protobuf-java</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <groupId>com.google.cloud.spark</groupId>
        <artifactId>spark-bigquery-with-dependencies_2.13</artifactId>
        <version>0.34.0</version>
    </dependency>
</dependencies>

I tried updating to the latest versions, but I get the same error:

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.13</artifactId>
        <version>3.5.1</version>
        <scope>provided</scope>
        <exclusions>
            <exclusion>
                <groupId>com.google.protobuf</groupId>
                <artifactId>protobuf-java</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <groupId>com.google.cloud.spark</groupId>
        <artifactId>spark-bigquery-with-dependencies_2.13</artifactId>
        <version>0.41.0</version>
    </dependency>
</dependencies>

Does anyone know what might be going on?

apache-spark google-bigquery google-cloud-dataproc
1 Answer

Ideally, the BigQuery connector dependency should have `provided` scope. Dataproc Serverless images 2.0+ ship built-in GCS and BigQuery connectors, and those take precedence over the versions in your pom.xml. Bundling your own copy of the connector in the job jar means two different versions of `BigQueryRelationProvider` end up on the classpath, loaded by different class loaders, which produces exactly the `not a subtype` `ServiceConfigurationError` above.
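A minimal sketch of the change: keep the connector dependency so the code still compiles locally, but mark it `provided` so the runtime copy from the Dataproc Serverless image is the only one on the job classpath (the version number shown is the one from the question; the scope is the part that matters):

```xml
<dependency>
    <groupId>com.google.cloud.spark</groupId>
    <artifactId>spark-bigquery-with-dependencies_2.13</artifactId>
    <version>0.41.0</version>
    <!-- provided: compile against the connector API locally, but do not
         package it into the job jar; at runtime, the connector built into
         the Dataproc Serverless image is used instead -->
    <scope>provided</scope>
</dependency>
```

After this change, also make sure no shade/assembly plugin is still bundling the connector transitively into the submitted jar.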
