I have a RowMatrix xw:
scala> xw
res109: org.apache.spark.mllib.linalg.distributed.RowMatrix = org.apache.spark.mllib.linalg.distributed.RowMatrix@8e74950
I want to apply a function to each of its elements:
f(x) = exp(-x*x)
The type of the matrix's elements can be seen like this:
scala> xw.rows.first
res110: org.apache.spark.mllib.linalg.Vector = [0.008930720313311474,0.017169380001300985,-0.013414238595719104,0.02239106636801034,0.023009502628798143,0.02891937604244297,0.03378470969100948,0.03644030110678057,0.0031586143217048825,0.011230244437457062,0.00477455053405408,0.020251682490519785,-0.005429788421130285,0.011578489275815267,0.0019301805575977788,0.022513736483645713,0.009475039307158668,0.019457912132044935,0.019209006632742498,-0.029811133879879596]
My main problem is that I can't use map on the vector:
scala> xw.rows.map(row => row.map(e => breeze.numerics.exp(e)))
<console>:44: error: value map is not a member of org.apache.spark.mllib.linalg.Vector
xw.rows.map(row => row.map(e => breeze.numerics.exp(e)))
^
scala>
How can I solve this?
This assumes you know you actually have a DenseVector (which seems to be the case). You can call toArray on the vector, which does have a map, and then convert back to a DenseVector with Vectors.dense:
xw.rows.map{row => Vectors.dense(row.toArray.map{e => breeze.numerics.exp(e)})}
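For the f(x) = exp(-x*x) from the question, a minimal sketch along these lines (assuming xw is the RowMatrix shown above, and wrapping the result back into a new RowMatrix) could look like:

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// apply f(x) = exp(-x*x) to every element of every row, then rebuild a RowMatrix
val transformed = new RowMatrix(
  xw.rows.map(row => Vectors.dense(row.toArray.map(x => math.exp(-x * x))))
)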
You could do the same with a SparseVector; it is mathematically correct, but converting to an array is likely to be very inefficient. Another option is to call row.copy and then use foreachActive, which makes sense for both dense and sparse vectors. However, copy may not be implemented for the particular Vector class you're using, and if you don't know the vector's type you can't mutate its data. If you really need to support both sparse and dense vectors, I would do something like this:
import org.apache.spark.mllib.linalg.{DenseVector, SparseVector, Vectors}

xw.rows.map {
  case denseVec: DenseVector =>
    Vectors.dense(denseVec.toArray.map(e => breeze.numerics.exp(e)))
  case sparseVec: SparseVector =>
    // we only need to update the values of the sparse vector -- the indices remain the same
    val newValues: Array[Double] = sparseVec.values.map(e => breeze.numerics.exp(e))
    Vectors.sparse(sparseVec.size, sparseVec.indices, newValues)
}
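If you want to avoid the pattern match, here is a rough sketch of the foreachActive route mentioned above, assuming a Spark version where Vector.foreachActive is public. The helper name expActive is just for illustration, and like the match above it only touches the stored entries, so implicit zeros of a sparse vector stay zero rather than becoming exp(0):

import org.apache.spark.mllib.linalg.{Vector, Vectors}
import scala.collection.mutable.ArrayBuffer

// hypothetical helper: transform only the active (stored) entries of a vector
def expActive(v: Vector): Vector = {
  val indices = ArrayBuffer.empty[Int]
  val values = ArrayBuffer.empty[Double]
  v.foreachActive { (i, x) =>
    indices += i
    values += breeze.numerics.exp(x)
  }
  Vectors.sparse(v.size, indices.toArray, values.toArray)
}

val transformed = xw.rows.map(expActive)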