Apache Beam 2.7.0 crashes on UTF-8 encoded French characters

Question · 3 votes · 1 answer

I am trying to write a CSV file coming from Google Cloud Platform into Datastore. The file contains French characters/accents, and I get an error message about decoding.

After trying to encode and decode from "latin-1" to "utf-8" without success (using unicode, unicodedata and codecs), I tried to change things manually...

The OS I am using defaults to "ascii" encoding, so I manually changed "Anaconda3/envs/py27/lib/site.py" to use utf-8:

def setencoding():
    """Set the string encoding used by the Unicode implementation.  The
    default is 'ascii', but if you're willing to experiment, you can
    change this."""
    encoding = "utf-8" # Default value set by _PyUnicode_Init()
    sys.setdefaultencoding("utf-8")

I have tried it locally with a test file, printing and then writing a string with accented characters to a file, and it works!

# -*- coding: utf-8 -*-
import codecs

string = 'naïve café'
test_decode = codecs.utf_8_decode(string, "strict", True)[0]
print(test_decode)

with open('./test.txt', 'w') as outfile:
    outfile.write(test_decode)

But no luck with apache_beam...

Then I tried to manually change "/usr/lib/python2.7/encodings/utf_8.py" and put "ignore" instead of "strict" in codecs.utf_8_decode:

def decode(input, errors='ignore'):
    return codecs.utf_8_decode(input, errors, True)

But I realized that apache_beam does not use this file, or at least does not take any changes to it into account.

Any idea how to handle this?

Please find the error message below:

Traceback (most recent call last):
  File "etablissementsFiness.py", line 146, in <module>
    dataflow(run_locally)
  File "etablissementsFiness.py", line 140, in dataflow
    | 'Write entities into Datastore' >> WriteToDatastore(PROJECT)
  File "C:\Users\Georges\Anaconda3\envs\py27\lib\site-packages\apache_beam\pipeline.py", line 414, in __exit__
    self.run().wait_until_finish()
  File "C:\Users\Georges\Anaconda3\envs\py27\lib\site-packages\apache_beam\runners\dataflow\dataflow_runner.py", line 1148, in wait_until_finish
    (self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 642, in do_work
    work_executor.execute()
  File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 156, in execute
    op.start()
  File "dataflow_worker/native_operations.py", line 38, in dataflow_worker.native_operations.NativeReadOperation.start
    def start(self):
  File "dataflow_worker/native_operations.py", line 39, in dataflow_worker.native_operations.NativeReadOperation.start
    with self.scoped_start_state:
  File "dataflow_worker/native_operations.py", line 44, in dataflow_worker.native_operations.NativeReadOperation.start
    with self.spec.source.reader() as reader:
  File "dataflow_worker/native_operations.py", line 48, in dataflow_worker.native_operations.NativeReadOperation.start
    for value in reader:
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/textio.py", line 201, in read_records
    yield self._coder.decode(record)
  File "/usr/local/lib/python2.7/dist-packages/apache_beam/coders/coders.py", line 307, in decode
    return value.decode('utf-8')
  File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 190: invalid continuation byte
python-2.7 google-cloud-platform google-cloud-datastore apache-beam
1 Answer

0 votes

This error, "UnicodeDecodeError: 'utf8' codec can't decode byte", means that your CSV file still contains some wrong bytes that the decoder cannot recognize as UTF-8 characters.
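For instance, byte 0xE9 is 'é' in latin-1 (ISO-8859-1), and on its own it is not a valid UTF-8 sequence. A quick interactive check (illustrative only, not from the original post) reproduces the same error on Python 2:

>>> 'caf\xe9s'.decode('utf-8')        # latin-1 byte fed to the UTF-8 decoder
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 3: invalid continuation byte
>>> 'caf\xe9s'.decode('iso-8859-1')   # the same bytes decoded as latin-1
u'caf\xe9s'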

The simplest solution is to convert the CSV input file and validate that it contains no UTF-8 errors before submitting it to Datastore. A simple online UTF-8 validator can check it, or you can scan it yourself, as in the sketch below.
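A minimal sketch (not part of the original answer; the file name 'etablissements.csv' is a placeholder) that reports which lines of the file are not valid UTF-8:

# Scan a CSV file in binary mode and report every line that fails UTF-8 decoding.
with open('etablissements.csv', 'rb') as f:
    for lineno, raw in enumerate(f, 1):
        try:
            raw.decode('utf-8')
        except UnicodeDecodeError as exc:
            print('line %d: %r -> %s' % (lineno, raw, exc))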

If you need to convert latin-1 to UTF-8 in Python, you can do it like this:

string.decode('iso-8859-1').encode('utf8')
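If you cannot fix the file before running the job, one possible workaround (a sketch under the assumption that the input really is latin-1; the bucket path is a placeholder) is to have Beam hand you raw bytes and apply that same conversion inside the pipeline:

import apache_beam as beam
from apache_beam import coders

with beam.Pipeline() as p:
    lines = (
        p
        # Read each record as raw bytes instead of letting textio decode UTF-8.
        | 'Read raw lines' >> beam.io.ReadFromText('gs://my-bucket/finess.csv',
                                                   coder=coders.BytesCoder())
        # Decode latin-1 explicitly (or use errors='ignore' to drop bad bytes).
        | 'Decode latin-1' >> beam.Map(lambda line: line.decode('iso-8859-1'))
    )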