Ric*_*Liu · 5 · tags: scala, apache-spark, rdd, apache-spark-sql, apache-spark-mllib
I have an RDD with roughly 500 columns and 200 million rows, and RDD.columns.indexOf("target", 0) returns Int = 77, so my target dependent variable sits in column 77. But I don't know enough about how to select a desired subset of columns as features: say I want columns 23 to 59, 111 to 357, and 399 to 489. I wonder whether I can apply:
val data = rdd.map(col => new LabeledPoint(
  col(77).toDouble, Vectors.dense(??.map(x => x.toDouble).toArray)))
Any suggestion or guidance would be greatly appreciated.
Maybe I have mixed up RDDs and DataFrames. I know I can convert the RDD to a DataFrame with .toDF(); perhaps the goal is easier to achieve with a DataFrame than with an RDD.
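For reference, a minimal sketch of what that map could look like, assuming rdd is an RDD of indexable string records (e.g. RDD[Array[String]]) and using the 0-based ranges from the question; featureIdx and data are hypothetical names:

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Flatten the desired column ranges into a single index list
// (hypothetical, assuming 0-based positions as in the question)
val featureIdx: Seq[Int] = (23 to 59) ++ (111 to 357) ++ (399 to 489)

val data = rdd.map(row => LabeledPoint(
  row(77).toDouble, // target column
  Vectors.dense(featureIdx.map(i => row(i).toDouble).toArray)
))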
zer*_*323 · 13
I assume your data looks more or less like this:
import scala.util.Random.{setSeed, nextDouble}
setSeed(1)

// A dummy 5-column schema: one irrelevant column, the target, three features
case class Record(
  foo: Double, target: Double, x1: Double, x2: Double, x3: Double)

val rows = sc.parallelize(
  (1 to 10).map(_ => Record(
    nextDouble, nextDouble, nextDouble, nextDouble, nextDouble
  ))
)
val df = sqlContext.createDataFrame(rows)
df.registerTempTable("df")

// Rounded preview of the generated data
sqlContext.sql("""
  SELECT ROUND(foo, 2) foo,
         ROUND(target, 2) target,
         ROUND(x1, 2) x1,
         ROUND(x2, 2) x2,
         ROUND(x3, 2) x3
  FROM df""").show
So we have data like this:
+----+------+----+----+----+
| foo|target| x1| x2| x3|
+----+------+----+----+----+
|0.73| 0.41|0.21|0.33|0.33|
|0.01| 0.96|0.94|0.95|0.95|
| 0.4| 0.35|0.29|0.51|0.51|
|0.77| 0.66|0.16|0.38|0.38|
|0.69| 0.81|0.01|0.52|0.52|
|0.14| 0.48|0.54|0.58|0.58|
|0.62| 0.18|0.01|0.16|0.16|
|0.54| 0.97|0.25|0.39|0.39|
|0.43| 0.23|0.89|0.04|0.04|
|0.66| 0.12|0.65|0.98|0.98|
+----+------+----+----+----+
We want to ignore foo and x2 and extract LabeledPoint(target, Array(x1, x3)):
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Map feature names to indices
val featInd = List("x1", "x3").map(df.columns.indexOf(_))

// Or, if you want to exclude columns instead:
// val ignored = List("foo", "target", "x2")
// val featInd = df.columns.diff(ignored).map(df.columns.indexOf(_)).toList

// Get index of the target column
val targetInd = df.columns.indexOf("target")

df.rdd.map(r => LabeledPoint(
  r.getDouble(targetInd), // Get target value
  // Map feature indices to values
  Vectors.dense(featInd.map(r.getDouble(_)).toArray)
))
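The same pattern extends to the column ranges from the original question, and the resulting RDD[LabeledPoint] plugs straight into MLlib. A hedged sketch, assuming the selected columns are already Double-typed (otherwise convert explicitly, e.g. r.getString(i).toDouble), with the range bounds and the LinearRegressionWithSGD call purely illustrative:

import org.apache.spark.mllib.regression.LinearRegressionWithSGD

// The asker's ranges as flat 0-based indices (hypothetical)
val featInd = ((23 to 59) ++ (111 to 357) ++ (399 to 489)).toList
val targetInd = 77

val labeled = df.rdd.map(r => LabeledPoint(
  r.getDouble(targetInd),
  Vectors.dense(featInd.map(r.getDouble(_)).toArray)
))

// Feed the RDD[LabeledPoint] to any mllib learner, e.g. SGD linear regression
val model = LinearRegressionWithSGD.train(labeled, 100)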
Run Code Online (Sandbox Code Playgroud)