Filtering a key/value pair RDD in PySpark by value equality

use*_*524 1 python filter apache-spark rdd pyspark

Given

[('Project', 10),
 ("Alice's", 11),
 ('in', 401),
 ('Wonderland,', 3),
 ('Lewis', 10),
 ('Carroll', 4),
 ('', 2238),
 ('is', 10),
 ('use', 24),
 ('of', 596),
 ('anyone', 4),
 ('anywhere', 3),

where the values of the pair RDD are word frequencies.

I want to return only the words that occur 10 times. Expected output:

 [('Project', 10),
   ('Lewis', 10),
   ('is', 10)]

I tried using

rdd.filter(lambda words: (words,10)).collect()

but it still returns the whole list. What should I do?

Gio*_*ous 5

Your lambda function is wrong: `(words, 10)` builds a two-element tuple, which is always truthy, so `filter` keeps every element. It should be

rdd.filter(lambda words: words[1] == 10).collect()

For example,

my_rdd = sc.parallelize([('Project', 10), ("Alice's", 11), ('in', 401), ('Wonderland,', 3), ('Lewis', 10), ('is', 10)])

>>> my_rdd.filter(lambda w: w[1] == 10).collect()
[('Project', 10), ('Lewis', 10), ('is', 10)]
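If you don't have a Spark context handy, the same predicate can be checked with Python's built-in `filter`, which mirrors what `RDD.filter` does element by element (a local sketch, no Spark required):

```python
pairs = [('Project', 10), ("Alice's", 11), ('in', 401),
         ('Wonderland,', 3), ('Lewis', 10), ('is', 10)]

# filter keeps elements for which the predicate returns True.
# The original lambda, `lambda words: (words, 10)`, returns a tuple,
# which is always truthy -- so nothing was ever filtered out.
kept = list(filter(lambda w: w[1] == 10, pairs))
# kept == [('Project', 10), ('Lewis', 10), ('is', 10)]
```

The predicate `w[1] == 10` indexes into each `(word, count)` tuple and compares the count, exactly as in the RDD version above.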