Container exits with non-zero exit code 50 when saving a Spark DataFrame to HDFS


I'm running a small script in PySpark that pulls some data from an HBase table and builds a PySpark DataFrame. I'm trying to save the DataFrame back to the local HDFS, but I keep hitting an exit code 50 error. The same operation succeeds on relatively small DataFrames but fails on large ones. I'm happy to share any code snippets, and I can also share the full environment from the Spark UI as a screenshot. Any help would be appreciated.

Here is the configuration of my Spark (2.0.0) properties (shown here as a dictionary). The job is deployed in yarn-client mode.

configuration = {
    'spark.executor.memory': '4g',
    'spark.executor.instances': '32',
    'spark.driver.memory': '12g',
    'spark.yarn.queue': 'default'
}
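For context, a dictionary like this is typically fed into a SparkConf before the session is built. A minimal sketch, assuming the session is created in YARN client mode as the post describes (the application name is a placeholder, not from the original post):

from pyspark import SparkConf
from pyspark.sql import SparkSession

# Apply the property dictionary from above to a SparkConf.
# 'hbase-extract' is a placeholder application name.
conf = SparkConf().setAppName('hbase-extract').setMaster('yarn')
for key, value in configuration.items():
    conf.set(key, value)

# With the default client deploy mode, this matches the
# yarn-client setup described in the question.
spark = SparkSession.builder.config(conf=conf).getOrCreate()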

After obtaining the DataFrame, I try to save it with:

df.write.save('user//hdfs//test_df', format='com.databricks.spark.csv', mode='append')
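As a side note, CSV support is built into Spark 2.0, so the external com.databricks.spark.csv package should not be needed there. An equivalent call with the same path and mode would be:

# Spark 2.0+ bundles a native CSV data source, so the external
# com.databricks.spark.csv package is redundant here.
df.write.csv('user//hdfs//test_df', mode='append')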

The following error block keeps repeating until the job fails. I believe this may be an OOM error, but I've tried providing as many as 128 executors, each with 16 GB of memory, to no avail. Any workaround would be greatly appreciated.

Container exited with a non-zero exit code 50

17/09/25 15:19:35 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 64, fslhdppdata2611.imfs.micron.com): ExecutorLostFailure (executor 42 exited caused by one of the running tasks) Reason: Container marked as failed: container_e37_1502313369058_6420779_01_000043 on host: fslhdppdata2611.imfs.micron.com. Exit status: 50. Diagnostics: Exception from container-launch.
Container id: container_e37_1502313369058_6420779_01_000043
Exit code: 50
Stack trace: org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException: Launch container failed
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:109)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:89)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:392)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:317)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Shell output: main : command provided 1
main : run as user is hdfsprod
main : requested yarn user is hdfsprod
Getting exit code file...
Creating script paths...
Writing pid file...
Writing to tmp file /opt/hadoop/data/03/hadoop/yarn/local/nmPrivate/application_1502313369058_6420779/container_e37_1502313369058_6420779_01_000043/container_e37_1502313369058_6420779_01_000043.pid.tmp
Writing to cgroup task files...
Creating local dirs...
Launching container...
Getting exit code file...
Creating script paths...
apache-spark pyspark hdfs apache-spark-sql hadoop-yarn
1 Answer

The exit code appears to come from org.apache.spark.util.SparkExitCode (based on this answer). Exit code 50 should therefore mean UNCAUGHT_EXCEPTION, i.e. an executor JVM terminated because of an unhandled exception.
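For reference, SparkExitCode also defines OOM = 52, so a heap OOM that Spark itself catches would normally surface as exit code 52 rather than 50. Since 50 only says that an executor died with an unhandled exception, a common mitigation pattern (a speculative sketch, not part of the original answer) is to give each executor more off-heap headroom and to shrink partition sizes before the write:

# Speculative mitigation, not from the original answer.
# In Spark 2.x on YARN, off-heap headroom per executor is controlled by
# spark.yarn.executor.memoryOverhead (in megabytes). Add this to the
# configuration dictionary *before* the session is created.
configuration['spark.yarn.executor.memoryOverhead'] = '2048'

# Smaller partitions mean less memory pressure per write task;
# 200 is an arbitrary illustrative value.
df.repartition(200).write.csv('user//hdfs//test_df', mode='append')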
