Error when calling the DataFrame show method in PySpark


I am trying to read data from BigQuery using pandas and PySpark. I am able to fetch the data, but I somehow get the error below when converting it to a Spark DataFrame.

py4j.protocol.Py4JJavaError: An error occurred while calling o28.showString.
: java.lang.IllegalStateException: Could not find TLS ALPN provider; no working netty-tcnative, Conscrypt, or Jetty NPN/ALPN available
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:258)
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:171)
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:120)
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:401)
    at com.google.cloud.spark.bigquery.repackaged.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:444)
    at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:223)
    at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:169)
    at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:156)
    at com.google.cloud.spark.bigquery.repackaged.com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157) 

Below are the environment details:

Python version : 3.7
Spark version : 2.4.3
Java version : 1.8
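
For anyone reproducing this, the Python and Spark versions can be double-checked from a Python shell along these lines:

import sys
import pyspark

print(sys.version)           # expect 3.7.x
print(pyspark.__version__)   # expect 2.4.3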

Here is the code:

import google.auth
import pyspark
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession , SQLContext
from google.cloud import bigquery


# Currently this only supports queries which have at least 10 MB of results
QUERY = """ SELECT * FROM test limit 1 """

#spark = SparkSession.builder.appName('Query Results').getOrCreate()
sc = pyspark.SparkContext()
bq = bigquery.Client()

print('Querying BigQuery')
project_id = ''
query_job = bq.query(QUERY,project=project_id)

# Wait for query execution
query_job.result()

df = SQLContext(sc).read.format('bigquery') \
    .option('dataset', query_job.destination.dataset_id) \
    .option('table', query_job.destination.table_id)\
    .option("type", "direct")\
    .load()

df.show()

I would appreciate some help resolving this issue.
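
For context on the traceback: "Could not find TLS ALPN provider" means the gRPC client shaded inside the BigQuery connector cannot find a working netty-tcnative or Conscrypt implementation on the driver classpath. A minimal sketch of one commonly suggested workaround, not taken from this post, is to load the connector's "with-dependencies" artifact, which bundles the shaded netty-tcnative; the coordinates and version below are assumptions for Spark 2.4 / Scala 2.11:

from pyspark.sql import SparkSession

# Sketch only: spark.jars.packages must be set before any SparkContext
# is started, so build the session this way instead of calling
# pyspark.SparkContext() first.
spark = SparkSession.builder \
    .appName('Query Results') \
    .config('spark.jars.packages',
            'com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.17.3') \
    .getOrCreate()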

python-3.x apache-spark pyspark google-bigquery
1 Answer

I managed to find a better solution by referring to this link; below is my working code:

Before running the code below, install the pandas_gbq package in your Python environment (e.g. pip install pandas-gbq).

import pandas_gbq
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession

project_id = "<your-project-id>"
query = """ SELECT * from testSchema.testTable"""
athletes = pandas_gbq.read_gbq(query=query, project_id=project_id, dialect='standard')


# Create the SparkContext and a Spark session on top of it
sc = SparkContext()
spark = SparkSession(sc)

# convert from Pandas to Spark
sparkDF = spark.createDataFrame(athletes)

# perform an operation on the DataFrame
print(sparkDF.count())

sparkDF.show()
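
One caveat worth noting with this approach: spark.createDataFrame has to infer a Spark schema from the pandas dtypes, which can fail on NULL-heavy or object-typed BigQuery results. A minimal sketch of passing an explicit schema instead (the column names here are hypothetical):

from pyspark.sql.types import StructType, StructField, StringType, LongType

# Hypothetical columns for illustration; replace with your table's fields.
schema = StructType([
    StructField('athlete_name', StringType(), True),
    StructField('medal_count', LongType(), True),
])
sparkDF = spark.createDataFrame(athletes, schema=schema)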

Hope it helps someone! Happy pysparking :)
