How do I make a generator work in Spark mapPartitions()?


I am trying to use mapPartitions in Spark to process a large text corpus. Suppose we have some half-processed data that looks like this:

    text_1 = [['A', 'B', 'C', 'D', 'E'],
    ['F', 'E', 'G', 'A', 'B'],
    ['D', 'E', 'H', 'A', 'B'],
    ['A', 'B', 'C', 'F', 'E'],
    ['A', 'B', 'C', 'J', 'E'],
    ['E', 'H', 'A', 'B', 'C'],
    ['E', 'G', 'A', 'B', 'C'],
    ['C', 'F', 'E', 'G', 'A'],
    ['C', 'D', 'E', 'H', 'A'],
    ['C', 'J', 'E', 'H', 'A'],
    ['H', 'A', 'B', 'C', 'F'],
    ['H', 'A', 'B', 'C', 'J'],
    ['B', 'C', 'F', 'E', 'G'],
    ['B', 'C', 'D', 'E', 'H'],
    ['B', 'C', 'F', 'E', 'K'],
    ['B', 'C', 'J', 'E', 'H'],
    ['G', 'A', 'B', 'C', 'F'],
    ['J', 'E', 'H', 'A', 'B']]

Each letter is a word. I also have a vocabulary:

    V = ['D','F','G','C','J','K']
    text_1RDD = sc.parallelize(text_1)

I want to run the following in Spark:

    filtered_lists = text_1RDD.mapPartitions(partitions)

    filtered_lists.collect()

I have this function:

    def partitions(list_of_lists, vc):
        for w in vc:
            iterator = []
            for sub_list in list_of_lists:
                if w in sub_list:
                    iterator.append(sub_list)
            yield (w, len(iterator))

If I run it locally like this:

    c = partitions(text_1,V)
    for item in c:
        print(item)

it returns the correct counts:

    ('D', 4)
    ('F', 7)
    ('G', 5)
    ('C', 15)
    ('J', 5)
    ('K', 1)

However, I don't know how to run it in Spark:

    filtered_lists = text_1RDD.mapPartitions(partitions)

    filtered_lists.collect()

mapPartitions only passes a single argument to the function, so running it this way in Spark produces a lot of errors...
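I assume I could work around the argument problem by binding the vocabulary with functools.partial, so that mapPartitions still receives a one-argument function, roughly like the sketch below, but that still leaves the strange counts described next:

    from functools import partial

    # vc is bound in advance, so mapPartitions only ever sees a one-argument function
    filtered_lists = text_1RDD.mapPartitions(partial(partitions, vc=V))
    filtered_lists.collect()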

But even when I hardcode the vocabulary inside the partition function:

    def partitionsV(list_of_lists):
        vc = ['D', 'F', 'G', 'C', 'J', 'K']
        for w in vc:
            iterator = []
            for sub_list in list_of_lists:
                if w in sub_list:
                    iterator.append(sub_list)
            yield (w, len(iterator))

...I get this:

    filtered_lists = text_1RDD.mapPartitions(partitionsV)

    filtered_lists.collect()

Output:

     [('D', 2),
     ('F', 0),
     ('G', 0),
     ('C', 0),
     ('J', 0),
     ('K', 0),
     ('D', 0),
     ('F', 0),
     ('G', 0),
     ('C', 0),
     ('J', 0),
     ('K', 0),
     ('D', 1),
     ('F', 0),
     ('G', 0),
     ('C', 0),
     ('J', 0),
     ('K', 0),
     ('D', 1),
     ('F', 0),
     ('G', 0),
     ('C', 0),
     ('J', 0),
     ('K', 0)]

Clearly, the generator is not working as expected. I'm completely stuck. I'm quite new to Spark. If anyone could explain to me what is going on here, I would be very grateful...

python apache-spark pyspark bigdata
1 Answer

This is just another word-count problem, and mapPartitions is not the right tool for the job:

    from operator import add

    v = set(['D', 'F', 'G', 'C', 'J', 'K'])

    # for each list, keep only the vocabulary words it contains,
    # then count occurrences per word across the whole RDD
    result = text_1RDD.flatMap(v.intersection).map(lambda x: (x, 1)).reduceByKey(add)

The result is:

for x in result.sortByKey().collect(): 
    print(x) 
('C', 15)
('D', 4)
('F', 7)
('G', 5)
('J', 5)
('K', 1)
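As for why your mapPartitions version returns mostly zeros: mapPartitions hands the function a one-shot iterator over the partition, so the first vocabulary word consumes it and every later word sees an empty sequence; on top of that, each partition yields its own counts, so they would still have to be summed per key. If you do want to stay with mapPartitions, a rough sketch (assuming each partition comfortably fits in memory as a list; count_partition is just an illustrative name) could look like this:

    from operator import add

    V = ['D', 'F', 'G', 'C', 'J', 'K']

    def count_partition(list_of_lists):
        # materialize the one-shot partition iterator so it can be scanned once per word
        rows = list(list_of_lists)
        for w in V:
            yield (w, sum(1 for sub_list in rows if w in sub_list))

    # each partition emits its own partial counts, so sum them per key across partitions
    per_word = text_1RDD.mapPartitions(count_partition).reduceByKey(add)

That said, the flatMap version above is simpler and never needs to materialize a whole partition.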