I'm new to Spark and want to understand how MapReduce happens under the hood, to make sure I'm using it correctly. This post gives a good answer, but my results don't seem to follow the logic it describes. I'm working through the Spark quick start guide in Scala on the command line. When I add up the line lengths, everything works as expected; the total length is 1213:
scala> val textFile = sc.textFile("README.md")
scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
scala> val linesWithSparkLengths = linesWithSpark.map(s => s.length)
scala> linesWithSparkLengths.foreach(println)
Result:
14
78
73
42
68
17
62
45
76
64
54
74
84
29
136
77
77
73
70
scala> val totalLWSparkLength = linesWithSparkLengths.reduce((a,b) => a+b)
totalLWSparkLength: Int = 1213
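For reference, the nineteen lengths printed above really do sum to 1213; a quick plain-Scala check (no Spark needed) confirms the addition:

val lengths = List(14,78,73,42,68,17,62,45,76,64,54,74,84,29,136,77,77,73,70)
println(lengths.sum)  // 1213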
When I tweak it slightly to use (a-b) instead of (a+b),
scala> val totalLWSparkTest = linesWithSparkLengths.reduce((a,b) => a-b)
then, following the logic in that post, I expect -1185:
List(14,78,73,42,68,17,62,45,76,64,54,74,84,29,136,77,77,73,70).reduce( (x,y) => x - y )
Step 1 …
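To spell out that expectation, here is a minimal plain-Scala sketch of the strictly left-to-right evaluation I had in mind (my assumption about the combining order, not necessarily what Spark's distributed reduce actually does across partitions):

val lengths = List(14,78,73,42,68,17,62,45,76,64,54,74,84,29,136,77,77,73,70)

// scanLeft exposes every intermediate accumulator value:
// 14, 14-78 = -64, -64-73 = -137, ..., down to -1185
lengths.tail.scanLeft(lengths.head)(_ - _).foreach(println)

println(lengths.reduceLeft(_ - _))  // -1185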