Chr*_*ter (score 8). Tags: scala, apache-spark, apache-spark-mllib
I've just started using ML and Apache Spark, so I've been experimenting with linear regression based on the Spark examples. I can't seem to produce a sensible model for any data other than the sample that ships with the example, and the intercept is always 0.0 regardless of the input data.
I prepared a simple training data set based on the function:
y = (2*x1) + (3*x2) + 4
i.e. I expect the intercept to be 4 and the weights to be (2, 3).
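For reproducibility, the dummy data file can be generated with a few lines of plain Scala. (This is a sketch of my assumption about the file format, inferred from the parsing code below: one "y,x1,x2" line per point; the object name and the choice of a 5x5 grid are hypothetical, not from the original post.)

```scala
// Hypothetical generator for data/dummydata.txt: one "y,x1,x2" line
// per point, with y = (2*x1) + (3*x2) + 4 over a small 5x5 grid.
object GenDummyData {
  def main(args: Array[String]): Unit = {
    val lines = for {
      x1 <- 1 to 5
      x2 <- 1 to 5
    } yield s"${2 * x1 + 3 * x2 + 4},$x1,$x2"
    lines.foreach(println)  // first line: 9,1,1
  }
}
```

The first generated point, (x1, x2) = (1, 1), gives y = 9, matching the `Actual: 9.0` in the output above.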
If I run LinearRegressionWithSGD.train(...) on the raw data, the model is:
Model intercept: 0.0, weights: [NaN,NaN]
and the predictions are all NaN:
Features: [1.0,1.0], Predicted: NaN, Actual: 9.0
Features: [1.0,2.0], Predicted: NaN, Actual: 12.0
and so on.
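As an aside (my diagnosis, not from the original answers): NaN weights are the classic symptom of SGD diverging, here plausibly because a step size of 0.2 is too large for the unscaled features. A plain-Scala full-batch gradient descent on the same function, with an explicit bias column and a smaller step, converges to the expected parameters, which suggests the data itself is fine (all names here are hypothetical):

```scala
// Minimal full-batch gradient descent on y = 2*x1 + 3*x2 + 4,
// with an explicit bias feature (the leading 1.0). No Spark involved.
object TinyGD {
  def main(args: Array[String]): Unit = {
    // (features-with-bias, label) over a 5x5 grid of (x1, x2)
    val data = for { x1 <- 1 to 5; x2 <- 1 to 5 }
      yield (Array(1.0, x1.toDouble, x2.toDouble), 2.0 * x1 + 3.0 * x2 + 4.0)
    val w = Array(0.0, 0.0, 0.0)
    val step = 0.02  // small enough for the raw feature scale
    for (_ <- 1 to 50000) {
      val grad = Array(0.0, 0.0, 0.0)
      for ((x, y) <- data) {
        val err = x.zip(w).map { case (a, b) => a * b }.sum - y
        for (i <- w.indices) grad(i) += 2.0 * err * x(i) / data.size
      }
      for (i <- w.indices) w(i) -= step * grad(i)
    }
    // expected ≈ 4.000, 2.000, 3.000 (bias, w1, w2)
    println(w.map(v => f"$v%.3f").mkString(", "))
  }
}
```

With step = 0.2 on these raw features the same loop overshoots and the weights blow up to NaN, which is consistent with the output above.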
If I scale the data first, I get:
Model intercept: 0.0, weights: [17.407863391511754,2.463212481736855]
Features: [1.0,1.0], Predicted: 19.871075873248607, Actual: 9.0
Features: [1.0,2.0], Predicted: 22.334288354985464, Actual: 12.0
Features: [1.0,3.0], Predicted: 24.797500836722318, Actual: 15.0
and so on.
Either I'm doing something wrong, or I don't understand what the output of this model should be, so can anyone suggest where I'm going wrong?
My code is as follows:
// Assumed imports for this snippet (Spark 1.x MLlib APIs)
import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.feature.StandardScaler

// Load and parse the dummy data (y, x1, x2) for y = (2*x1) + (3*x2) + 4
// i.e. intercept should be 4, weights (2, 3)?
val data = sc.textFile("data/dummydata.txt")
// LabeledPoint is (label, [features])
val parsedData = data.map { line =>
val parts = line.split(',')
val label = parts(0).toDouble
val features = Array(parts(1), parts(2)) map (_.toDouble)
LabeledPoint(label, Vectors.dense(features))
}
// Scale the features
val scaler = new StandardScaler(withMean = true, withStd = true)
.fit(parsedData.map(x => x.features))
val scaledData = parsedData
  .map(x => LabeledPoint(x.label, scaler.transform(x.features)))
// Building the model: SGD = stochastic gradient descent
val numIterations = 1000
val step = 0.2
val model = LinearRegressionWithSGD.train(scaledData, numIterations, step)
println(s">>>> Model intercept: ${model.intercept}, weights: ${model.weights}")
// Evaluate model on training examples
val valuesAndPreds = scaledData.map { point =>
val prediction = model.predict(point.features)
(point.label, point.features, prediction)
}
// Print out features, actual and predicted values...
valuesAndPreds.take(10).foreach({case (v, f, p) =>
println(s"Features: ${f}, Predicted: ${p}, Actual: ${v}")})
Chr*_*ter (score 11)
@Noah: Thanks, your suggestion prompted me to take another look, and I found some example code here that lets you generate the intercept and set other parameters, such as the number of iterations, via the optimizer.
Here is my revised model-generation code, which seems to work correctly on my dummy data:
// Building the model: SGD = stochastic gradient descent:
// Need to setIntercept = true, and seems only to work with scaled data
val numIterations = 600
val stepSize = 0.1
val algorithm = new LinearRegressionWithSGD()
algorithm.setIntercept(true)
algorithm.optimizer
.setNumIterations(numIterations)
.setStepSize(stepSize)
val model = algorithm.run(scaledData)
It still seems to require scaled data rather than the raw data as input, but that's OK for my purposes.
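One follow-up worth noting (my addition, with a hypothetical helper name): if you train on standardized features z_i = (x_i - mu_i) / sigma_i, the learned weights and intercept live in the scaled space, which is why they don't look like (2, 3) and 4. They can be mapped back to original-space coefficients like this:

```scala
// Map a model fitted on standardized features back to the original
// feature space: y = w·z + b with z_i = (x_i - mu_i) / sigma_i
// becomes y = sum_i (w_i / sigma_i) * x_i + (b - sum_i w_i * mu_i / sigma_i).
object Unscale {
  def unscale(w: Array[Double], b: Double,
              mu: Array[Double], sigma: Array[Double]): (Array[Double], Double) = {
    val wOrig = w.indices.map(i => w(i) / sigma(i)).toArray
    val bOrig = b - w.indices.map(i => w(i) * mu(i) / sigma(i)).sum
    (wOrig, bOrig)
  }
  def main(args: Array[String]): Unit = {
    // Illustration: if mu = (3, 3) and sigma = (2, 2), the true model
    // y = 2*x1 + 3*x2 + 4 becomes y = 4*z1 + 6*z2 + 19 in scaled space;
    // unscaling recovers weights (2, 3) and intercept 4.
    val (w, b) = unscale(Array(4.0, 6.0), 19.0, Array(3.0, 3.0), Array(2.0, 2.0))
    println(w.mkString(",") + " " + b)  // prints 2.0,3.0 4.0
  }
}
```

In the Spark code above, mu and sigma would come from the fitted `StandardScaler` (mean and standard deviation of each feature).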
The `train` method you are using is a shortcut that sets the intercept to zero and does not attempt to find one. If you use the underlying class instead, you can get a non-zero intercept:
val model = new LinearRegressionWithSGD(step, numIterations, 1.0).
setIntercept(true).
run(scaledData)
This should now give you an intercept.
Viewed: 6981 times