Posted by adl*_*z15

PySpark: LabeledPoint RDD with many features

New to Spark here. All the examples I've read involve small datasets, for example:

from pyspark.mllib.regression import LabeledPoint

RDD = sc.parallelize([
    LabeledPoint(1, [1.0, 2.0, 3.0]),
    LabeledPoint(2, [3.0, 4.0, 5.0]),
])

However, I have a large dataset with 50+ features.

Example row:

u'2596,51,3,258,0,510,221,232,148,6279,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,5'

I want to quickly create a LabeledPoint RDD in PySpark. I tried indexing the last position as the label (the first argument of LabeledPoint) and the first n-1 positions as a dense vector, but I get the error below. Any guidance is appreciated! Note: if I change the [] to () when creating the LabeledPoint, I get an "invalid syntax" error.

df = myDataRDD.map(lambda line: line.split(','))
data = [
    LabeledPoint(df[54], df[0:53])
]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-67-fa1b56e8441e> in <module>()
      2 df = myDataRDD.map(lambda line: line.split(','))
      3 data = [
----> 4      LabeledPoint(df[54], df[0:53])
      5 ]

TypeError: 'PipelinedRDD' object does not support indexing
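The direct cause of the error is that df is still an RDD of rows, not a single parsed row, so it cannot be indexed or sliced (and changing [] to (), i.e. df(0:53), is simply not valid Python, hence the "invalid syntax" error). The label/feature split has to happen inside the function passed to map, which runs once per line. A minimal sketch of that approach, reusing myDataRDD from the question:

from pyspark.mllib.regression import LabeledPoint

def parse_point(line):
    # Split the CSV row and cast each field to float.
    values = [float(x) for x in line.split(',')]
    # The last value is the label; everything before it becomes the
    # feature vector. Note that a slice such as [0:53] stops *before*
    # index 53, so the original attempt would also have dropped a
    # feature; [:-1] takes all but the last value regardless of width.
    return LabeledPoint(values[-1], values[:-1])

data = myDataRDD.map(parse_point)

Applied to the example row above, this takes the trailing 5 as the label and the preceding values as a dense feature vector.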

Tags: apache-spark, rdd, pyspark, apache-spark-mllib

4 votes · 1 answer · 9284 views
