I have read the other related questions, but they did not answer mine.
Code:
inputType.zip(inputColName).zipWithIndex.map {
  case (inputType, inputColName, idx) =>
    inputType match {
      case DoubleType => println("test1")
      case _ => println("test2")
    }
}
This gives: pattern type is incompatible with expected type; found: DoubleType.type, required: (DataType, String).
I tried two simplified versions, and the syntax looks correct:
List(1, 2, 3).zip(List(4, 5, 6)).map { case (a, b) =>
  a match {
    case 1 => println(s"First is $a, second is $b")
    case _ => println("test")
  }
}
The following also works:
inputType.zipWithIndex.map {
  case (inputType, idx) =>
    inputType match {
      case DoubleType => println("test1")
      case _ => println("test2")
    }
}
I don't understand why adding zip causes this pattern-matching type error.
You are missing the grouping of inputType and inputColName as a Tuple2:
inputType.zip(inputColName).zipWithIndex.map {
  case ((inputType, inputColName), idx) =>
    inputType match {
      case DoubleType => println("test1")
      case _ => println("test2")
    }
}
When you use zip, as in
inputType.zip(inputColName)
the Scala compiler treats the result as
List[(org.apache.spark.sql.types.NumericType with Product with Serializable, String)]
and when you then add .zipWithIndex, the Scala compiler reads it as
List[((org.apache.spark.sql.types.NumericType with Product with Serializable, String), Int)]
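The nesting can be checked with plain collections; here is a minimal sketch using Int/String lists as stand-ins for the Spark types:

```scala
object ZipShapeDemo {
  def main(args: Array[String]): Unit = {
    val inputType    = List(1, 2)        // stand-in for the DataType list
    val inputColName = List("c1", "c2")  // stand-in for the column-name list

    val zipped  = inputType.zip(inputColName)  // List[(Int, String)]
    val indexed = zipped.zipWithIndex          // List[((Int, String), Int)]

    println(zipped)   // List((1,c1), (2,c2))
    println(indexed)  // List(((1,c1),0), ((2,c2),1))
  }
}
```

Each element of the zipWithIndex result is a pair whose first component is itself the pair produced by zip, which is why the pattern needs nested parentheses.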
The problem
When you define the case as case (inputType, inputColName, idx), the Scala compiler treats inputType as the (org.apache.spark.sql.types.NumericType with Product with Serializable, String) part and inputColName as the Int part of the ((org.apache.spark.sql.types.NumericType with Product with Serializable, String), Int) type produced by inputType.zip(inputColName).zipWithIndex. So idx never gets bound.
Even if you drop idx and do the following, it is still valid (now inputType in the case is treated as (org.apache.spark.sql.types.NumericType with Product with Serializable, String)):
inputType.zip(inputColName).zipWithIndex.map {
  case (inputType, inputColName) =>
    inputType match {
      case (DoubleType, "col1") => println("test1")
      case _ => println("test2")
    }
}
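The binding behaviour described above can be seen concretely by running the same two-element pattern over a plain nested list (a sketch with Int/String stand-ins; the names are illustrative):

```scala
object BindingDemo {
  def main(args: Array[String]): Unit = {
    val indexed = List((1, "c1"), (2, "c2")).zipWithIndex  // List[((Int, String), Int)]

    // A two-element pattern matches the outer pair: the first name binds the
    // whole (value, column) tuple and the second name binds the index.
    indexed.foreach { case (inputType, inputColName) =>
      println(s"inputType=$inputType, inputColName=$inputColName")
    }
    // inputType=(1,c1), inputColName=0
    // inputType=(2,c2), inputColName=1
  }
}
```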
I hope the explanation is clear.