Emitting multiple pairs in a map operation

Jef*_*all 10 apache-spark pyspark

Suppose I have many rows of phone-call records in the format:

[CallingUser, ReceivingUser, Duration]

I want to know the total time a given user spends on the phone: the sum of Duration over every record in which that user appears as either CallingUser or ReceivingUser.

Effectively, for each record I want to emit two pairs: (CallingUser, Duration) and (ReceivingUser, Duration).

What is the most efficient way to do this? I could union two RDDs, but I'm not sure that's a good approach:

#Sample Data:
callData = sc.parallelize([["User1", "User2", 2], ["User1", "User3", 4], ["User2", "User1", 8]  ])


calls = callData.map(lambda record: (record[0], record[2]))

#The potentially inefficient step in question (RDDs don't support +=, so a union is needed):
calls = calls.union(callData.map(lambda record: (record[1], record[2])))


reduce = calls.reduceByKey(lambda a, b: a + b)

Oth*_*ers 11

You want flatMap. If you write a function that returns the list [(record[0], record[2]), (record[1], record[2])], then you can flatMap it!

  • Would you mind providing such a line of code? Thanks. (6 upvotes)
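To illustrate what this answer means without a Spark cluster, here is a plain-Python sketch: flatMap is equivalent to mapping each record to a list of pairs and then flattening, which `itertools.chain.from_iterable` does (this is only an analogy, not PySpark code):

```python
from itertools import chain

# Same sample data as in the question.
call_data = [["User1", "User2", 2], ["User1", "User3", 4], ["User2", "User1", 8]]

def emit_pairs(record):
    # Return both (caller, duration) and (receiver, duration) for one record.
    return [(record[0], record[2]), (record[1], record[2])]

# Flattening the per-record lists mirrors what flatMap does on an RDD.
pairs = list(chain.from_iterable(emit_pairs(r) for r in call_data))
print(pairs)
# [('User1', 2), ('User2', 2), ('User1', 4), ('User3', 4), ('User2', 8), ('User1', 8)]
```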

Sol*_*ran 8

Use flatMap(); it is designed for taking a single input and producing multiple mapped outputs. Complete code:

callData = sc.parallelize([["User1", "User2", 2], ["User1", "User3", 4], ["User2", "User1", 8]])

calls = callData.flatMap(lambda record: [(record[0], record[2]), (record[1], record[2])])
print(calls.collect())
# prints [('User1', 2), ('User2', 2), ('User1', 4), ('User3', 4), ('User2', 8), ('User1', 8)]

reduce = calls.reduceByKey(lambda a, b: a + b)
print(reduce.collect())
# prints [('User2', 10), ('User3', 4), ('User1', 14)]
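As a sanity check, the same per-user totals can be computed in plain Python (no Spark) by adding each record's Duration to both the caller and the receiver, matching the reduceByKey result:

```python
from collections import defaultdict

# Same sample data as in the answer.
call_data = [["User1", "User2", 2], ["User1", "User3", 4], ["User2", "User1", 8]]

totals = defaultdict(int)
for caller, receiver, duration in call_data:
    # Each call counts toward both participants' totals.
    totals[caller] += duration
    totals[receiver] += duration

print(dict(totals))
# {'User1': 14, 'User2': 10, 'User3': 4}
```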