PySpark Timeout Exception


I am running PySpark on Google Dataproc and am trying to do large-scale work with a network graph.

Here is my configuration:

import pyspark
from pyspark.sql import SparkSession

# Ship the BigQuery connector jar and the GraphFrames package to the cluster
conf = pyspark.SparkConf().setAll([('spark.jars', 'gs://spark-lib/bigquery/spark-bigquery-latest.jar'),
                                   ('spark.jars.packages', 'graphframes:graphframes:0.7.0-spark2.3-s_2.11')])

spark = SparkSession.builder \
  .appName('testing bq')\
  .config(conf=conf) \
  .getOrCreate()

However, when I run the label propagation algorithm from GraphFrames on the network graph, it always returns a Py4JJavaError caused by a timeout:

result = g_df.labelPropagation(maxIter=5)
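
(The question does not show how g_df was built; for context, a minimal GraphFrames construction, with hypothetical vertex and edge DataFrames v and e, would look like this:)

from graphframes import GraphFrame

# Hypothetical example data; the question does not include the real inputs.
# Vertices need an "id" column, edges need "src" and "dst" columns.
v = spark.createDataFrame([(1,), (2,), (3,)], ['id'])
e = spark.createDataFrame([(1, 2), (2, 3)], ['src', 'dst'])
g_df = GraphFrame(v, e)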

The error:

Py4JJavaError: An error occurred while calling o287.run.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 197.0 failed 4 times, most recent failure: Lost task 0.3 in stage 197.0 (TID 7247, cluster-network-graph-w-7.c.geotab-bi.internal, executor 50): ExecutorLostFailure (executor 50 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 127971 ms

How can I change this timeout parameter from PySpark, and what does it affect?

pyspark google-cloud-dataproc
1 Answer
The setting you want is spark.network.timeout, the default timeout for all network interactions. When they are not configured explicitly, this value is used in place of spark.core.connection.ack.wait.timeout, spark.storage.blockManagerSlaveTimeoutMs, spark.shuffle.io.connectionTimeout, spark.rpc.askTimeout, and spark.rpc.lookupTimeout. Its default is 120s, which lines up with the roughly 128-second heartbeat timeout reported in the error above.

See Spark Configuration.
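
A minimal sketch of raising it from PySpark, reusing the asker's SparkConf (the 800s and 60s values below are placeholder assumptions, not tuned recommendations):

import pyspark
from pyspark.sql import SparkSession

conf = pyspark.SparkConf().setAll([
    ('spark.jars', 'gs://spark-lib/bigquery/spark-bigquery-latest.jar'),
    ('spark.jars.packages', 'graphframes:graphframes:0.7.0-spark2.3-s_2.11'),
    ('spark.network.timeout', '800s'),           # global network timeout; default is 120s
    ('spark.executor.heartbeatInterval', '60s'), # must stay well below spark.network.timeout
])

spark = SparkSession.builder \
  .appName('testing bq') \
  .config(conf=conf) \
  .getOrCreate()

On Dataproc the same properties can also be passed at submission time, e.g. gcloud dataproc jobs submit pyspark --properties spark.network.timeout=800s,spark.executor.heartbeatInterval=60s.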
