I'm trying to run a logistic regression (LogisticRegressionWithLBFGS) with Spark MLlib (in Scala) on a dataset that contains categorical variables. I've found that Spark can't handle this kind of variable directly.
In R there is a simple way to deal with this: I convert the variables to factors (categories), and R then creates a set of columns encoded as {0,1} indicator variables.
How can I do the same thing with Spark?
scala bigdata categorical-data apache-spark apache-spark-mllib
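Not stated in the question itself, but a common approach is the StringIndexer / OneHotEncoder pair from spark.ml. A minimal sketch, assuming a hypothetical DataFrame df with a string column named "category" and the Spark 1.4–2.x style Transformer OneHotEncoder (in Spark 3.x the encoder is an Estimator and must be fit first):

import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer}

// df is a hypothetical DataFrame with a string column "category"
// 1) map each distinct category string to a numeric index
val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")
val indexed = indexer.fit(df).transform(df)

// 2) expand the index into a sparse {0,1} indicator vector,
//    analogous to R's factor / dummy-variable encoding
val encoder = new OneHotEncoder()
  .setInputCol("categoryIndex")
  .setOutputCol("categoryVec")
val encoded = encoder.transform(indexed)

encoded.show()

The resulting vector column can then be assembled with the other numeric features before training the model.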
I'm trying to read a JSON file like:
[
{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"31","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"30","Nrout":"5","up":null,"Crate":"2"}
,{"MLrate":"34","Nrout":"0","up":null,"Crate":"4"}
,{"MLrate":"33","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"30","Nrout":"8","up":null,"Crate":"2"}
]}
,{"IFAM":"EQR","KTM":1430006400000,"COL":22,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"30","Nrout":"0","up":null,"Crate":"0"}
,{"MLrate":"35","Nrout":"1","up":null,"Crate":"5"}
,{"MLrate":"30","Nrout":"6","up":null,"Crate":"2"}
,{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"38","Nrout":"8","up":null,"Crate":"1"}
]}
,...
]
I tried this command:
// read the JSON file into a DataFrame and display it
val df = sqlContext.read.json("namefile")
df.show()
But this doesn't work: my columns aren't recognized...
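The question doesn't say which Spark version is in use, but one common cause of unrecognized columns here is that the line-delimited JSON reader expects one complete JSON object per line, while this file spreads each object over several lines. A minimal sketch, assuming Spark 2.2+ (where the multiLine reader option exists) and that the nested DATA array should be flattened; "namefile" is the path from the question:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.explode

val spark = SparkSession.builder().appName("read-json").getOrCreate()
import spark.implicits._

// multiLine lets the reader parse a JSON document that spans several lines;
// without it, each line must be a self-contained JSON object
val df = spark.read.option("multiLine", true).json("namefile")

// DATA is an array of structs; explode it to get one row per array element
val flat = df
  .select($"IFAM", $"KTM", $"COL", explode($"DATA").as("d"))
  .select($"IFAM", $"KTM", $"COL", $"d.MLrate", $"d.Nrout", $"d.up", $"d.Crate")

flat.show()

On older 1.x versions (sqlContext, as in the question), an alternative is to reformat the file so each top-level object sits on a single line before reading it.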