Google Cloud Dataflow: cannot insert JSON data into BigQuery with `BigQueryDisposition.WRITE_TRUNCATE` via `WriteToBigQuery` / `BigQuerySink`

Question description (votes: 2, answers: 1)

Given a dataset like the following:

{"slot":"reward","result":1,"rank":1,"isLandscape":false,"p_type":"main","level":1276,"type":"ba","seqNum":42544}
{"slot":"reward_dlg","result":1,"rank":1,"isLandscape":false,"p_type":"main","level":1276,"type":"ba","seqNum":42545}
...more type json data here

I try to filter those JSON records and insert them into BigQuery with the Python SDK, as shown below:

import argparse
import json

import apache_beam as beam
from apache_beam import pvalue
from apache_beam.io import ReadFromText
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

ba_schema = 'slot:STRING,result:INTEGER,p_type:STRING,level:INTEGER'

class ParseJsonDoFn(beam.DoFn):
    B_TYPE = 'tag_B'
    def process(self, element):
        text_line = element.strip()
        data = json.loads(text_line)

        if data['type'] == 'ba':
            ba = {'slot': data['slot'], 'result': data['result'], 'p_type': data['p_type'], 'level': data['level']}
            yield pvalue.TaggedOutput(self.B_TYPE, ba)

def run(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--input',
                      dest='input',
                      default='data/path/data',
                      help='Input file to process.')
    known_args, pipeline_args = parser.parse_known_args(argv)
    pipeline_args.extend([
      '--runner=DirectRunner',
      '--project=project-id',
      '--job_name=data-job',
    ])
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = True
    with beam.Pipeline(options=pipeline_options) as p:
        lines = p | ReadFromText(known_args.input)

        multiple_lines = (
            lines
            | 'ParseJSON' >> (beam.ParDo(ParseJsonDoFn()).with_outputs(
                                      ParseJsonDoFn.B_TYPE)))

        b_line = multiple_lines.tag_B
        (b_line
            | "output_b" >> beam.io.WriteToBigQuery(
                                          'temp.ba',
                                          schema = ba_schema,
                                          write_disposition = beam.io.BigQueryDisposition.WRITE_TRUNCATE,
                                          create_disposition = beam.io.BigQueryDisposition.CREATE_IF_NEEDED
                                        ))
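
(For completeness: the snippet above is presumably invoked with the standard Beam entry point, along these lines:)

if __name__ == '__main__':
    run()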

The debug log shows:

INFO:root:finish <DoOperation output_b/WriteToBigQuery output_tags=['out'], receivers=[ConsumerSet[output_b/WriteToBigQuery.out0, coder=WindowedValueCoder[FastPrimitivesCoder], len(consumers)=0]]>
DEBUG:root:Successfully wrote 2 rows.

It looks like the two records with type:ba were inserted into the BigQuery table temp.ba. However, when I run

select * from `temp.ba`  limit 100;

there is no data in the table temp.ba.

Is there anything wrong with my code, or am I missing something?


Update:

Thanks to @Eric Schmidt for the answer; I understand there may be some lag before the initial data becomes available. However, 5 minutes after running the script above, there is still no data in the table.


When I try removing write_disposition = beam.io.BigQueryDisposition.WRITE_TRUNCATE and use BigQuerySink instead:

    | "output_b" >> beam.io.Write(
                      beam.io.BigQuerySink(
                          table = 'ba',
                          dataset = 'temp',
                          project = 'project-id',
                          schema = ba_schema,
                          #write_disposition = beam.io.BigQueryDisposition.WRITE_TRUNCATE,
                          create_disposition = beam.io.BigQueryDisposition.CREATE_IF_NEEDED
                        )
                    ))

the two records can be found immediately.

The table information is:

[table info screenshot]

Maybe I don't fully understand what the initial data availability lag means. Could someone give me more information?

google-bigquery google-cloud-dataflow apache-beam
1 Answer

2 votes

Two things to consider:

1) The Direct (local) runner uses streaming inserts. There is an initial data availability lag; see this post.
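
For reference, rows written via streaming inserts can sit in the streaming buffer for a while before they show up everywhere. A minimal sketch to check this with the google-cloud-bigquery client (assuming that library is installed; the project/dataset/table names are taken from the question):

# Sketch: inspect the destination table's streaming buffer.
from google.cloud import bigquery

client = bigquery.Client(project='project-id')
table = client.get_table('project-id.temp.ba')

print('rows in managed storage:', table.num_rows)
if table.streaming_buffer:
    # Rows still in the streaming buffer may not yet be reflected in
    # num_rows or in the table preview.
    print('estimated rows in streaming buffer:', table.streaming_buffer.estimated_rows)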

2) Make sure you fully qualify the project you are streaming into. With BigQuerySink(), use project="foo", dataset="bar", table="biz".
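
Applied to the pipeline in the question, a fully qualified WriteToBigQuery sink might look like this (a sketch reusing the asker's names; the 'project:dataset.table' string is one way to spell the destination):

# Sketch: fully qualify the destination table as 'project:dataset.table'.
(b_line
    | "output_b" >> beam.io.WriteToBigQuery(
                        'project-id:temp.ba',
                        schema=ba_schema,
                        write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
                        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))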

I suspect your issue is #1.
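
If the streaming-insert lag does turn out to be the problem, more recent Beam Python SDKs also allow bypassing streaming inserts by forcing batch load jobs. A sketch (the method parameter only exists in newer SDK versions, and the GCS temp bucket below is a placeholder you would need to replace):

# Sketch: force batch load jobs instead of streaming inserts (recent Beam SDKs only).
(b_line
    | "output_b" >> beam.io.WriteToBigQuery(
                        'project-id:temp.ba',
                        schema=ba_schema,
                        method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
                        custom_gcs_temp_location='gs://your-bucket/tmp',  # hypothetical bucket
                        write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
                        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED))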
