apache_beam[gcp] - Side input to ParDo

I can't find the correct way to add a side input to a ParDo when using apache_beam[gcp] version 2.4.0.

My pipeline is:

(pipeline
     | "Load" >> ReadFromText("query.txt")
     | "Count Words" >> CountWordsTransform())

class CountWordsTransform(beam.PTransform):
    def expand(self, p_collection):
        anotherPipleline = beam.Pipeline(runner="DataflowRunner", argv=[
            "--staging_location", ("%s/staging" % gcs_path),
            "--temp_location", ("%s/temp" % gcs_path),
            "--output", ("%s/output" % gcs_path),
            "--setup_file", "./setup.py"
        ])
        value2 = anotherPipleline | 'create2' >> Create([("a", 1), ("b", 2), ("c", 3)])
        return (p_collection
                | "Split" >> (beam.ParDo(FindWords(), beam.pvalue.AsDict(value2))))

where FindWords() is defined as:

class FindWords(beam.DoFn):
    def process(self, element, values):
        import re as regex
        return regex.findall(r"[A-Za-z\']+", element)
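
Note that FindWords never actually reads the values argument; because the side input is wrapped with beam.pvalue.AsDict, it arrives in process() as a plain dict. The following is only a hypothetical sketch (FindKnownWords is not part of the original code) of how it could be consumed:

import re as regex

import apache_beam as beam

class FindKnownWords(beam.DoFn):
    # Hypothetical variant of FindWords: 'values' is the dict produced by
    # wrapping the side-input PCollection with beam.pvalue.AsDict.
    def process(self, element, values):
        for word in regex.findall(r"[A-Za-z\']+", element):
            if word in values:
                # Emit the word together with the number stored for it.
                yield (word, values[word])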

I get the following error:

'NoneType' object has no attribute 'parts'

You are creating a separate pipeline inside your composite transform to build the side input - this causes problems, because PCollections should not be shared between different pipelines.

Instead, try creating the side input in the same pipeline and passing it to your transform as a parameter.

For example:

values = pipeline | "Get pcol for side input" >> beam.Create([("a", 1), ("b", 2), ("c", 3)])

(pipeline
    | "Load" >> beam.io.ReadFromText('gs://bucket/words.txt')
    | "Count Words" >> CountWordsTransform(values))

class CountWordsTransform(beam.PTransform):

    def __init__(self, values):
        self.values = values

    def expand(self, p_collection):
        return p_collection | "Split" >> (beam.ParDo(FindWords(), beam.pvalue.AsDict(self.values)))

The above was tested with 2.4.0.
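
Putting it together, a minimal end-to-end sketch of the same-pipeline approach (the input path is just an example, and the DoFn body only illustrates that the dict side input is available inside process()):

import re as regex

import apache_beam as beam


class FindWords(beam.DoFn):
    def process(self, element, values):
        # 'values' is the dict built from [("a", 1), ("b", 2), ("c", 3)].
        return regex.findall(r"[A-Za-z\']+", element)


class CountWordsTransform(beam.PTransform):
    def __init__(self, values):
        self.values = values

    def expand(self, p_collection):
        return p_collection | "Split" >> beam.ParDo(
            FindWords(), beam.pvalue.AsDict(self.values))


with beam.Pipeline() as pipeline:
    # The side input is created in the same pipeline as the main input.
    values = pipeline | "Get pcol for side input" >> beam.Create(
        [("a", 1), ("b", 2), ("c", 3)])

    (pipeline
     | "Load" >> beam.io.ReadFromText("gs://bucket/words.txt")
     | "Count Words" >> CountWordsTransform(values))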