Python Dataflow template: making runtime parameters globally accessible


So, the goal of the pipeline is to be able to use runtime variables to open a csv file and to name a BigQuery table.

I just need to be able to access these variables globally, or inside a ParDo, for example by passing them into a function.

I have tried creating a dummy string and then running a FlatMap over it to access the runtime parameters and make them global, although it returns nothing.

For example:

class CustomPipelineOptions(PipelineOptions):

    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_value_provider_argument(
            '--path',
            type=str,
            help='csv storage path')
        parser.add_value_provider_argument(
            '--table_name',
            type=str,
            help='Table Id')

def run():
    def rewrite_values(element):
        """ Rewrite default env values"""
        # global project_id
        # global campaign_id
        # global organization_id
        # global language
        # global file_path
        try:
            logging.info("File Path with str(): {}".format(str(custom_options.path)))
            logging.info("----------------------------")
            logging.info("element: {}".format(element))
            project_id = str(cloud_options.project)
            file_path = custom_options.path.get()
            table_name = custom_options.table_name.get()

            logging.info("project: {}".format(project_id))
            logging.info("File path: {}".format(file_path))
            logging.info("language: {}".format(table_name))
            logging.info("----------------------------")
        except Exception as e:
            logging.info("Error format----------------------------")
            raise KeyError(e)

        return file_path

    pipeline_options = PipelineOptions()
    cloud_options = pipeline_options.view_as(GoogleCloudOptions)
    custom_options = pipeline_options.view_as(CustomPipelineOptions)
    pipeline_options.view_as(SetupOptions).save_main_session = True

    # Beginning of the pipeline
    p = beam.Pipeline(options=pipeline_options)

    init_data = (p
                 | beam.Create(["Start"])
                 | beam.FlatMap(rewrite_values))

    # ... pipeline processing, running the pipeline, etc.

I can access the project variable without a problem, but everything else comes back blank.

If I set the custom_options variable as global, or when I pass the specific custom object into a function, e.g. | 'Read data' >> beam.ParDo(ReadGcsBlobs(path_file=custom_options.path)), it just returns something like RuntimeValueProvider(option: path, type: str, default_value: None).

If I use | 'Read data' >> beam.ParDo(ReadGcsBlobs(path_file=custom_options.path.get())) instead, the variable is an empty string.
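For context, the pattern Beam supports for templated pipelines is to hand the ValueProvider object itself to the DoFn and defer the .get() call to process(), which executes at job run time, after the runtime parameter has been supplied. A minimal sketch of that pattern (LogPath is a hypothetical DoFn, not from the code above):

import apache_beam as beam

class LogPath(beam.DoFn):
    """Hypothetical DoFn: stores the ValueProvider, resolves it at runtime."""
    def __init__(self, path_file):
        self.path_file = path_file  # still a ValueProvider, not a string

    def process(self, element):
        # Safe to call .get() here: process() runs on the workers at
        # job execution time, when the runtime parameter has a value.
        yield self.path_file.get()

# At construction time, pass the ValueProvider itself, without .get():
# | 'Read data' >> beam.ParDo(LogPath(custom_options.path))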

So essentially, I just need to access these variables globally. Is that possible?

Finally, to clarify: I don't want to use the ReadFromText method. I could use the runtime variable there, but merging the runtime options into the dict created from the csv file would be expensive, since I'm working with huge csv files.
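The ReadGcsBlobs transform mentioned earlier is not shown in the question. As an illustration only, such a DoFn could resolve the path at runtime and read the object with the google-cloud-storage client; the body below is a guess, and download_as_bytes() pulls the whole object into memory, so truly huge files would need a streaming read instead:

import apache_beam as beam
from google.cloud import storage

class ReadGcsBlobs(beam.DoFn):
    """Hypothetical implementation, reading a gs:// path given as a ValueProvider."""
    def __init__(self, path_file):
        self.path_file = path_file  # ValueProvider, resolved in process()

    def process(self, element):
        path = self.path_file.get()  # e.g. "gs://my-bucket/data.csv"
        bucket_name, blob_name = path[len("gs://"):].split("/", 1)
        blob = storage.Client().bucket(bucket_name).blob(blob_name)
        # Whole-object read: simple, but memory-bound for large csv files.
        for line in blob.download_as_bytes().decode("utf-8").splitlines():
            yield line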

python google-cloud-platform google-cloud-dataflow apache-beam
1 Answer

For me, it worked by declaring cloud_options and custom_options as global. Note that save_main_session = True is also set, so the main session, including these module-level globals, is pickled and made available to the workers:

import argparse, logging

import apache_beam as beam
from apache_beam.options.pipeline_options import GoogleCloudOptions
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions

class CustomPipelineOptions(PipelineOptions):

    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_value_provider_argument(
            '--path',
            type=str,
            help='csv storage path')
        parser.add_value_provider_argument(
            '--table_name',
            type=str,
            help='Table Id')

def rewrite_values(element):
    """Rewrite default env values."""
    # cloud_options and custom_options are module-level globals, set in run().
    try:
        logging.info("File Path with str(): {}".format(str(custom_options.path.get())))
        logging.info("----------------------------")
        logging.info("element: {}".format(element))
        project_id = str(cloud_options.project)
        file_path = custom_options.path.get()
        table_name = custom_options.table_name.get()

        logging.info("project: {}".format(project_id))
        logging.info("File path: {}".format(file_path))
        logging.info("table name: {}".format(table_name))
        logging.info("----------------------------")
    except Exception as e:
        logging.info("Error format----------------------------")
        raise KeyError(e)

    return file_path


def run(argv=None):
  parser = argparse.ArgumentParser()
  known_args, pipeline_args = parser.parse_known_args(argv)

  global cloud_options
  global custom_options

  pipeline_options = PipelineOptions(pipeline_args)
  cloud_options = pipeline_options.view_as(GoogleCloudOptions)
  custom_options = pipeline_options.view_as(CustomPipelineOptions)
  pipeline_options.view_as(SetupOptions).save_main_session = True

  # Beginning of the pipeline
  p = beam.Pipeline(options=pipeline_options)

  init_data = (p
               | beam.Create(["Start"])
               | beam.FlatMap(rewrite_values))

  result = p.run()
  # result.wait_until_finish()

if __name__ == '__main__':
  run()

After setting the PROJECT and BUCKET variables, I staged the template with:

python script.py \
    --runner DataflowRunner \
    --project $PROJECT \
    --staging_location gs://$BUCKET/staging \
    --temp_location gs://$BUCKET/temp \
    --template_location gs://$BUCKET/templates/global_options

and executed it, providing the path and table_name options:

gcloud dataflow jobs run global_options \
   --gcs-location gs://$BUCKET/templates/global_options \
   --parameters path=test_path,table_name=test_table
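
As a side note, not part of the original answer: the same script can be exercised locally with the DirectRunner before staging the template. Options passed on the command line at construction time are wrapped as StaticValueProvider, so .get() resolves immediately (the values below are placeholders):

python script.py \
    --runner DirectRunner \
    --project $PROJECT \
    --path gs://$BUCKET/test.csv \
    --table_name test_table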

And the runtime parameters are logged fine from within the FlatMap (visible in the job's worker logs).

