I'm new to PySpark. I wrote this code in PySpark:
def filterOut2(line):
    return [x for x in line if x != 2]

filtered_lists = data.map(filterOut2)
But I get this error:
'list' object has no attribute 'map'
How do I perform a map operation on my data, specifically in PySpark, in a way that lets me filter my data down to those values for which my condition evaluates to true?
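The error occurs because `data` here is a plain Python list, and lists have no `.map` method; that method belongs to Spark RDDs. A minimal sketch of the distinction (the RDD variant is shown in comments and assumes a SparkContext named `sc` is available):

```python
data = [[1, 2, 3, 5], [1, 2, 5, 2], [3, 5, 2, 8]]

def filterOut2(line):
    return [x for x in line if x != 2]

# A plain list has no .map attribute, hence the AttributeError:
print(hasattr(data, "map"))  # False

# Plain Python: apply the function with the built-in map().
filtered_lists = list(map(filterOut2, data))
print(filtered_lists)  # [[1, 3, 5], [1, 5], [3, 5, 8]]

# PySpark (assuming a SparkContext `sc`): build an RDD first, then .map works.
# rdd = sc.parallelize(data)
# filtered_lists = rdd.map(filterOut2).collect()
```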
map(filterOut2, data)
works:
>>> data = [[1,2,3,5],[1,2,5,2],[3,5,2,8],[6,3,1,2],[5,3,2,5],[4,1,2,5]]
>>> def filterOut2(line):
...     return [x for x in line if x != 2]
...
>>> list(map(filterOut2, data))
[[1, 3, 5], [1, 5], [3, 5, 8], [6, 3, 1], [5, 3, 5], [4, 1, 5]]
map() takes exactly 1 argument (2 given)
It looks like you've redefined map. Try __builtin__.map(filterOut2, data) (on Python 3, the module is named builtins).
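To make the shadowing concrete, here is a small sketch; the one-argument redefinition of `map` is hypothetical, standing in for whatever assignment produced the error above:

```python
import builtins

data = [[1, 2, 3, 5], [6, 3, 1, 2]]

def filterOut2(line):
    return [x for x in line if x != 2]

# Hypothetical accidental redefinition that shadows the built-in map:
map = lambda f: f

# Calling map(filterOut2, data) now raises:
# TypeError: <lambda>() takes 1 positional argument but 2 were given

# builtins.map bypasses the shadowed name (the module is __builtin__ on Python 2):
result = list(builtins.map(filterOut2, data))
print(result)  # [[1, 3, 5], [6, 3, 1]]

# Alternatively, `del map` at the top level removes the shadow and
# restores lookup of the built-in name.
```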
Alternatively, use a list comprehension:
>>> [filterOut2(line) for line in data]
[[1, 3, 5], [1, 5], [3, 5, 8], [6, 3, 1], [5, 3, 5], [4, 1, 5]]
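The same per-line filtering can also be phrased with the built-in filter(), which states "keep the values where the condition holds" directly; a small equivalent sketch:

```python
data = [[1,2,3,5],[1,2,5,2],[3,5,2,8],[6,3,1,2],[5,3,2,5],[4,1,2,5]]

# filter() keeps the elements for which the predicate returns True.
result = [list(filter(lambda x: x != 2, line)) for line in data]
print(result)  # [[1, 3, 5], [1, 5], [3, 5, 8], [6, 3, 1], [5, 3, 5], [4, 1, 5]]
```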