Running Scala code in the Spark shell

lea*_*ner 4 scala apache-spark apache-spark-mllib

Hi, I'm new to Spark and Scala. I'm running the Scala code below at the spark-shell prompt. The program compiles fine and the shell shows "defined module MLlib", but nothing is printed to the screen. What am I doing wrong? Is there another way to run this program in the Spark Scala shell and get the output?

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.regression.LabeledPoint

object MLlib {

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName(s"Book example: Scala")
    val sc = new SparkContext(conf)

    // Load 2 types of emails from text files: spam and ham (non-spam).
    // Each line has text from one email.
    val spam = sc.textFile("/home/training/Spam.txt")
    val ham = sc.textFile("/home/training/Ham.txt")

    // Create a HashingTF instance to map email text to vectors of 100 features.
    val tf = new HashingTF(numFeatures = 100)
    // Each email is split into words, and each word is mapped to one feature.
    val spamFeatures = spam.map(email => tf.transform(email.split(" ")))
    val hamFeatures = ham.map(email => tf.transform(email.split(" ")))

    // Create LabeledPoint datasets for positive (spam) and negative (ham) examples.
    val positiveExamples = spamFeatures.map(features => LabeledPoint(1, features))
    val negativeExamples = hamFeatures.map(features => LabeledPoint(0, features))
    val trainingData = positiveExamples ++ negativeExamples
    trainingData.cache() // Cache data since Logistic Regression is an iterative algorithm.

    // Create a Logistic Regression learner which uses the SGD optimizer.
    val lrLearner = new LogisticRegressionWithSGD()
    // Run the actual learning algorithm on the training data.
    val model = lrLearner.run(trainingData)

    // Test on a positive example (spam) and a negative one (ham).
    // First apply the same HashingTF feature transformation used on the training data.
    val posTestExample = tf.transform("O M G GET cheap stuff by sending money to ...".split(" "))
    val negTestExample = tf.transform("Hi Dad, I started studying Spark the other ...".split(" "))
    // Now use the learned model to predict spam/ham for new emails.
    println(s"Prediction for positive test example: ${model.predict(posTestExample)}")
    println(s"Prediction for negative test example: ${model.predict(negTestExample)}")

    sc.stop()
  }
}

Ton*_*res 5

A couple of things:

In the Spark shell the object only gets defined, so main is not called right away. You have to call it explicitly after defining the object:

MLlib.main(Array())

In fact, if you are going to keep working in the shell/REPL, you can do away with the object entirely; you can define the function directly. For example:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.regression.LabeledPoint

def MLlib(): Unit = {
    //the rest of your code
}
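For illustration, here is a minimal sketch of how the whole program could be written as a single function in the spark-shell. It assumes the shell's pre-defined SparkContext sc (see the point about SparkContext below) and reuses the paths and parameters from the question; the function name runSpamClassifier is just a placeholder:

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.regression.LabeledPoint

// Uses the spark-shell's pre-defined SparkContext `sc`; no SparkConf/SparkContext is created here.
def runSpamClassifier(): Unit = {
  // Load the two email datasets from the same paths as in the question.
  val spam = sc.textFile("/home/training/Spam.txt")
  val ham = sc.textFile("/home/training/Ham.txt")

  // Map each email to a 100-feature term-frequency vector.
  val tf = new HashingTF(numFeatures = 100)
  val spamFeatures = spam.map(email => tf.transform(email.split(" ")))
  val hamFeatures = ham.map(email => tf.transform(email.split(" ")))

  // Label spam as 1 and ham as 0, then cache the combined training set.
  val trainingData = spamFeatures.map(f => LabeledPoint(1, f)) ++
                     hamFeatures.map(f => LabeledPoint(0, f))
  trainingData.cache()

  // Train the logistic regression model with the SGD optimizer.
  val model = new LogisticRegressionWithSGD().run(trainingData)

  // Score one spam-like and one ham-like example and print the predictions.
  val posTest = tf.transform("O M G GET cheap stuff by sending money to ...".split(" "))
  val negTest = tf.transform("Hi Dad, I started studying Spark the other ...".split(" "))
  println(s"Prediction for positive test example: ${model.predict(posTest)}")
  println(s"Prediction for negative test example: ${model.predict(negTest)}")
}

runSpamClassifier() // call it explicitly; the predictions are printed in the shell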

However, you should not be initializing the SparkContext in the shell. From the documentation:

In the Spark shell, a special interpreter-aware SparkContext is already created for you, in the variable called sc. Making your own SparkContext will not work.

So you have to either remove that bit from your code, or compile the program into a jar and run it with spark-submit.
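If you take the spark-submit route, the invocation could look roughly like this, assuming the MLlib object above has been compiled into a jar (the jar name and master URL here are placeholders):

spark-submit --class MLlib --master local[*] mllib-example.jar

In that case the program keeps its own SparkConf/SparkContext setup, since it no longer runs inside the shell.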