I currently have a Python Dataflow job whose final sink is a PCollection write to BigQuery. It is failing with the following error:
Workflow failed. Causes: S01:XXXX+XXX+Write/WriteToBigQuery/NativeWrite failed., BigQuery import job "dataflow_job_XXXXXX" failed., BigQuery job "dataflow_job_XXXXXX" in project "XXXXXX" finished with error(s): errorResult: Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 19; errors: 1
To get a more detailed error report, I run:
bq --format=prettyjson show -j dataflow_job_XXXXXX
which shows things like this (there are a bunch of errors; this is just one of them):
{
"location": "gs://XXXXX/XXXXXX/tmp/XXXXX/10002237702794672370/dax-tmp-2019-02-05_20_14_50-18341731408970037725-S01-0-5144bb700f6a9f0b/-shard--try-00d3c2c24d5b0371-endshard.json",
"message": "Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 11; errors: 1. Please look into the errors[] collection for more details.",
"reason": "invalid"
},
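The same error details can also be pulled programmatically; here is a minimal sketch using the google-cloud-bigquery Python client (the project and job IDs are placeholders, and non-US datasets may need a location argument):

from google.cloud import bigquery

client = bigquery.Client(project="XXXXXX")     # placeholder project
job = client.get_job("dataflow_job_XXXXXX")    # placeholder load-job ID; pass location=... for non-US datasets
for err in (job.errors or []):
    # each entry mirrors one item of the errors[] collection shown above
    print(err.get("reason"), err.get("location"), err.get("message"))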
I then go looking for the specific shard to see which PCollection rows were bad and what I need to do to filter them out or fix my bug:
gsutil ls gs://XXXXX/XXXXXX/tmp/XXXXX/10002237702794672370/dax-tmp-2019-02-05_20_14_50-18341731408970037725-S01-0-5144bb700f6a9f0b/-shard--try-00d3c2c24d5b0371-endshard.json
But that command returns:
CommandException: One or more URLs matched no objects.
What are the best practices for debugging a job like this (which takes multiple hours to run, btw)? My current idea is to write the PCollection as JSON to a non-temporary location on GCS and try to ingest it myself.
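A minimal sketch of that idea, assuming the PCollection currently fed into WriteToBigQuery is called rows and using a placeholder bucket path:

import json
import apache_beam as beam

# `rows` stands in for the PCollection currently passed to WriteToBigQuery.
(rows
 | "SerializeRows" >> beam.Map(json.dumps)
 | "DumpRowsToGcs" >> beam.io.WriteToText(
       "gs://my-debug-bucket/debug/rows",   # placeholder, non-temporary location
       file_name_suffix=".json"))

The resulting newline-delimited JSON files can then be loaded manually with bq load --source_format=NEWLINE_DELIMITED_JSON, which reports the offending rows directly.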
For your type of error, I do the following:
This article may give you some ideas for handling invalid input.
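One common pattern for handling invalid input, sketched here under the assumption that the input PCollection is called rows and that is_valid_row stands in for whatever check matches your table schema, is to split the input into a valid output and a dead-letter output before WriteToBigQuery:

import json
import apache_beam as beam

def is_valid_row(row):
    # placeholder check; replace with validation matching your table schema
    return isinstance(row, dict) and "required_field" in row

class SplitRows(beam.DoFn):
    """Send rows that fail validation to a tagged 'invalid' output."""
    def process(self, row):
        if is_valid_row(row):
            yield row
        else:
            yield beam.pvalue.TaggedOutput("invalid", row)

results = rows | "SplitRows" >> beam.ParDo(SplitRows()).with_outputs(
    "invalid", main="valid")

results.valid | "WriteToBQ" >> beam.io.WriteToBigQuery(
    "project:dataset.table")                     # placeholder table spec
(results.invalid
 | "SerializeInvalid" >> beam.Map(json.dumps)
 | "WriteDeadLetter" >> beam.io.WriteToText(
       "gs://my-debug-bucket/dead_letter/rows",  # placeholder location
       file_name_suffix=".json"))

The dead-letter files then capture the exact rows that would otherwise make the BigQuery load fail, so they can be inspected and re-processed separately instead of being hunted down in the job's temporary shards.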