rdd.zipWithIndex() throws IllegalArgumentException on a very large dataset

Question (0 votes, 2 answers)

I am running a Python notebook in Azure Databricks. When I try to add row numbers using rdd.zipWithIndex(), I get an IllegalArgumentException. The file is 2.72 GB and has 1,238,951 rows (I think — text editors behave strangely with a file this large). It ran for over 4 hours before failing. I am wondering whether we are hitting some kind of size limit, since the exception is IllegalArgumentException. I would like to know how to prevent this exception, and/or any way to make it faster. I suppose I may have to split it into smaller files. Any help would be appreciated.

Code snippet

runKey = "cca2e0f0-bec0-408a-a5cb-341d26e8b7e0"  # a new id for every file
filePath = "/mnt/my_file_path/my_file.txt"
rdd = sc.textFile(filePath)
# delimiter is defined elsewhere in the notebook
rdd = rdd.zipWithIndex().map(lambda line: "{}{}{}{}{}".format(str(runKey), delimiter, str(line[1]+1), delimiter, line[0]))

Error output

  File "<command-3893172145851236>", line 26, in OpenFileRDD
    rdd = rdd.zipWithIndex().map(lambda line: "{}{}{}{}{}".format(str(runKey), delimiter, str(line[1]+1), delimiter, line[0]))
  File "/databricks/spark/python/pyspark/rdd.py", line 2524, in zipWithIndex
    nums = self.mapPartitions(lambda it: [sum(1 for i in it)]).collect()
  File "/databricks/spark/python/pyspark/rdd.py", line 967, in collect
    sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
  File "/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
    return_value = get_return_value(
  File "/databricks/spark/python/pyspark/sql/utils.py", line 117, in deco
    return f(*a, **kw)
  File "/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 1234573.0 failed 4 times, most recent failure: Lost task 4.3 in stage 1234573.0 (TID 46064376) (10.0.2.5 executor 5455): java.lang.IllegalArgumentException
    at java.nio.CharBuffer.allocate(CharBuffer.java:334)
    at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:810)
    at org.apache.hadoop.io.Text.decode(Text.java:412)
    at org.apache.hadoop.io.Text.decode(Text.java:389)
    at org.apache.hadoop.io.Text.toString(Text.java:280)
    at org.apache.spark.SparkContext.$anonfun$textFile$2(SparkContext.scala:1065)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:442)
    at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:797)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:521)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2241)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:313)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2978)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2925)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2919)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2919)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1357)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1357)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1357)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3186)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3127)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3115)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1123)
    at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2500)
    at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1071)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:125)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:454)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:1069)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:260)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.GeneratedMethodAccessor6189.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
    at py4j.Gateway.invoke(Gateway.java:295)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.IllegalArgumentException
    at java.nio.CharBuffer.allocate(CharBuffer.java:334)
    at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:810)
    at org.apache.hadoop.io.Text.decode(Text.java:412)
    at org.apache.hadoop.io.Text.decode(Text.java:389)
    at org.apache.hadoop.io.Text.toString(Text.java:280)
    at org.apache.spark.SparkContext.$anonfun$textFile$2(SparkContext.scala:1065)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:442)
    at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:797)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:521)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2241)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:313)
Tags: python, pyspark, databricks, rdd, azure-databricks
2 Answers
0 votes

I tried with more than 965 characters in a single line and it worked fine, i.e. 15,295 characters in a single line.


So, the exception occurs when a string object would contain more than the maximum limit of 2^31 - 1 characters; per your stack trace, CharBuffer.allocate throws IllegalArgumentException when asked for an invalid (overflowed) capacity. The number of characters in a single line of your file may be exceeding this limit. Check once whether that is the case.
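
One rough way to run that check (a minimal sketch, assuming the mounted file is also visible under /dbfs) is to scan the raw bytes and track the longest line without ever decoding it:

# Scan the file in 1 MiB chunks and track the longest line in bytes,
# so no whole line ever has to fit in memory.
longest = current = 0
with open("/dbfs/mnt/my_file_path/my_file.txt", "rb") as f:
    while True:
        chunk = f.read(1 << 20)
        if not chunk:
            break
        parts = chunk.split(b"\n")
        current += len(parts[0])
        for part in parts[1:]:
            longest = max(longest, current)
            current = len(part)
longest = max(longest, current)
print(f"longest line: {longest:,} bytes (limit is 2**31 - 1 = {2**31 - 1:,})")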

So, I created a file whose single line was 2^31 - 1 characters long and tried your code on it; it ran for 1.2 hours and then failed with a kernel-restart error.
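
For reference, a test file like that can be generated in chunks so the 2 GiB line never has to fit in memory at once (a sketch; the output path is just an example):

# Write a single line of 2**31 - 1 bytes, followed by a newline.
chunk = b"x" * (1 << 20)   # 1 MiB of filler
remaining = (1 << 31) - 1
with open("/dbfs/long_line.txt", "wb") as f:
    while remaining > 0:
        n = min(remaining, len(chunk))
        f.write(chunk[:n])
        remaining -= n
    f.write(b"\n")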

I suggest splitting the longer lines first and then adding your information.

# Inspect each line and split it into smaller pieces.
with open("/dbfs/output.txt", "r") as file_data:
    for line in file_data:
        print(f"before split {len(line)}")
        data = line.split("0")
        print(f"after split {len(data)}")

Here I used "0" as the split condition; in your case, use whatever splits the lines meaningfully according to your requirements.


That was just a single line of data in the text file. Check how large the strings in your file are, or you can split the lines by record size as below.

charset = "UTF-8"
recordSize = 500  # fixed record length in bytes
df = (sc.binaryRecords("dbfs:/output.txt", recordSize)
        .map(lambda record: (str(record, charset),))
        .toDF(schema=["value"]))
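
If that works for your file, the numbering from the question can then be applied to the bounded records instead of the raw lines (a sketch that reuses the runKey and delimiter variables from the question):

# zipWithIndex now counts fixed 500-byte records, so no single element
# can approach the 2**31 - 1 character limit.
rdd = (sc.binaryRecords("dbfs:/output.txt", recordSize)
         .map(lambda record: str(record, charset)))
rdd = rdd.zipWithIndex().map(
    lambda pair: "{}{}{}{}{}".format(runKey, delimiter, pair[1] + 1, delimiter, pair[0]))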



0 votes

I concluded that the last line of the file was corrupted. I could not open the file in most text editors, and the few that could open it were unstable or unresponsive. However, I downloaded a trial of UltraEdit, which was very fast and smooth. UltraEdit opened the file easily and revealed what I believe were hidden/corrupted characters at the end of the last line. They showed up as small squares that other text editors did not display at all. To be exact, there were 1,779,974,601 small squares! I recommend UltraEdit for efficiently searching a file for corrupted characters.
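
For anyone who wants to confirm this without a special editor, a quick sketch like the following can inspect the raw tail of the file (path taken from the question; the "small squares" are typically NUL or other control bytes):

# Read the last 4 KiB of the file and count non-printable bytes.
import string
printable = set(string.printable.encode())
with open("/dbfs/mnt/my_file_path/my_file.txt", "rb") as f:
    f.seek(0, 2)                    # jump to end of file
    size = f.tell()
    f.seek(max(0, size - 4096))     # back up 4 KiB
    tail = f.read()
bad = [b for b in tail if b not in printable]
print(f"{len(bad)} non-printable bytes in the last 4 KiB, e.g. {bad[:10]}")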
