PySpark trims all string fields by default when writing to CSV in Python


I am trying to write a dataset to a CSV file using Python code on Spark 3.3 (Scala 2), and by default it trims all string fields. For example, given the column values below:

" Text123"," jacob "

the output in the CSV is:

"Text123","jacob"

I do not want any string fields to be trimmed.

Below is my code:

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['target_BucketName', 'JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Convert DynamicFrame to DataFrame 
df_app = AWSGlueDataCatalog_node.toDF()

# Repartition the DataFrame to control output files APP
df_repartitioned_app = df_app.repartition(10)  

# Check for empty partitions and write only if data is present
if not df_repartitioned_app.rdd.isEmpty():
    df_repartitioned_app.write.format("csv") \
        .option("compression", "gzip") \
        .option("header", "true") \
        .option("delimiter", "|") \
        .save(output_path_app)
Tags: python, aws-glue, scala-2.10, apache-spark-3.0
1 Answer

Set the ignoreLeadingWhiteSpace and ignoreTrailingWhiteSpace options to false. Both default to true when writing CSV, which is why the fields are being trimmed:

    df_repartitioned_app.write.format("csv") \
        .option("compression", "gzip") \
        .option("header", "true") \
        .option("delimiter", "|") \
        .option("ignoreLeadingWhiteSpace", "false") \
        .option("ignoreTrailingWhiteSpace", "false") \
        .save(output_path_app)