I am using Spark 1.3.0.

My map reports that it has keys, but when I try to access those keys I get either None or a "key not found" error.
import scala.collection.mutable.HashMap
val labeldata = sc.textFile("/home/data/trainLabels2.csv")
val labels: Array[Array[String]] = labeldata.map(line => line.split(",")).collect()
var fn2label: HashMap[String,Int] = new HashMap()
labels.foreach{ x => fn2label += (x(0) -> x(1).toInt)}
My map looks like this:
scala> fn2label
res45: scala.collection.mutable.HashMap[String,Int] = Map("k2VDmKNaUlXtnMhsuCic" -> 1, "AGzOvc4dUfw1B8nDmY2X" -> 1, "BqRPMt4QY1sHzvF6JK7j" -> 3,.....
It even has keys:
scala> fn2label.keys
res46: Iterable[String] = Set("k2VDmKNaUlXtnMhsuCic", "AGzOvc4dUfw1B8nDmY2X", "BqRPMt4QY1sHzvF6JK7j",
But I can't access them:
scala> fn2label.get("k2VDmKNaUlXtnMhsuCic")
res48: Option[Int] = None
scala> fn2label("k2VDmKNaUlXtnMhsuCic")
java.util.NoSuchElementException: key not found: k2VDmKNaUlXtnMhsuCic
Things I have tried include broadcasting the map, broadcasting both the labels and the map, using a Map instead of a HashMap, and parallelizing it as described in /sf/answers/1731408731/ :
val mapRdd = sc.parallelize(fn2label.toSeq)
mapRdd.lookup("k2VDmKNaUlXtnMhsuCic")
res50: Seq[Int] = WrappedArray()
What am I missing?
Your data simply has extra quotes in it. The keys in your map are the quoted strings, so you have to look them up with the quotes included:
scala> val fn2label = scala.collection.mutable.HashMap("\"k2VDmKNaUlXtnMhsuCic\"" -> 1, "\"AGzOvc4dUfw1B8nDmY2X\"" -> 1, "\"BqRPMt4QY1sHzvF6JK7j\"" -> 3)
fn2label: scala.collection.mutable.HashMap[String,Int] = Map("BqRPMt4QY1sHzvF6JK7j" -> 3, "AGzOvc4dUfw1B8nDmY2X" -> 1, "k2VDmKNaUlXtnMhsuCic" -> 1)
scala> fn2label.get("\"k2VDmKNaUlXtnMhsuCic\"")
res4: Option[Int] = Some(1)
scala> fn2label.keys
res5: Iterable[String] = Set("BqRPMt4QY1sHzvF6JK7j", "AGzOvc4dUfw1B8nDmY2X", "k2VDmKNaUlXtnMhsuCic")
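If you would rather look files up by the unquoted name, a minimal sketch (not part of the lookup shown above; it reuses the file path and column layout from the question) is to strip the literal double quotes while splitting each line:

import scala.collection.mutable.HashMap

val labeldata = sc.textFile("/home/data/trainLabels2.csv")

// Remove the literal double quotes from every field before building the map.
val labels: Array[Array[String]] =
  labeldata.map(_.split(",").map(_.replaceAll("\"", ""))).collect()

val fn2label: HashMap[String, Int] = new HashMap()
labels.foreach { x => fn2label += (x(0) -> x(1).toInt) }

fn2label.get("k2VDmKNaUlXtnMhsuCic")   // should now return Some(1) if that key is present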