How to use the Java 8 date classes with Jackson and Spark?

ale*_*bit 3 java scala jackson apache-spark

I have a Spark 1.4.0 project in which I am trying to parse several JSON records that contain a timestamp field and store it in a ZonedDateTime object, using Jackson and the JSR-310 module. If I run the driver program from the IDE (namely, IntelliJ IDEA 14.0) it works fine, but if I build a fat jar with sbt assembly and run it with spark-submit, I get the following exception:

15/07/16 14:13:03 ERROR Executor: Exception in task 3.0 in stage 0.0 (TID 3)
java.lang.AbstractMethodError: com.mycompany.input.EventParser$$anonfun$1$$anon$1.com$fasterxml$jackson$module$scala$experimental$ScalaObjectMapper$_setter_$com$fasterxml$jackson$module$scala$experimental$ScalaObjectMapper$$typeCache_$eq(Lorg/spark-project/guava/cache/LoadingCache;)V
    at com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper$class.$init$(ScalaObjectMapper.scala:50)
    at com.mycompany.input.EventParser$$anonfun$1$$anon$1.<init>(EventParser.scala:27)
    at com.mycompany.input.EventParser$$anonfun$1.apply(EventParser.scala:27)
    at com.mycompany.input.EventParser$$anonfun$1.apply(EventParser.scala:24)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:70)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
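
For reference, the parsing code is roughly of the following shape (a minimal sketch rather than the exact project code: the Event class, its fields, and the input path are made up here, and the JSR-310 support is assumed to come from the jackson-datatype-jsr310 module):

import java.time.ZonedDateTime
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.datatype.jsr310.JSR310Module
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper

// Hypothetical record type; the timestamp field is what needs JSR-310 support.
case class Event(id: String, timestamp: ZonedDateTime)

val events = sc.textFile(inputPath).mapPartitions { lines =>
  // Anonymous "ObjectMapper with ScalaObjectMapper" built inside the closure,
  // which matches the $$anon$1 class that appears in the stack trace above.
  val mapper = new ObjectMapper() with ScalaObjectMapper
  mapper.registerModule(DefaultScalaModule)
  mapper.registerModule(new JSR310Module()) // java.time / ZonedDateTime support
  lines.map(line => mapper.readValue[Event](line))
}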

I have tried several combinations of assembly, Jackson, and Spark versions, but with no luck. My guess is that this is somehow related to a dependency conflict between Spark and my project (apparently involving the Guava library). Any ideas?

Thanks!

Edit: here is a sample project that reproduces the problem.

Ali*_*iza 5

I had a similar problem and solved it by changing two things:

1) I used ObjectMapper instead of ScalaObjectMapper, following a suggestion in the comments on this SO question: Error when running a job on Spark 1.4.0 using Jackson with ScalaObjectMapper.

2) I needed to define the mapper inside the map operation:

import com.fasterxml.jackson.databind.{DeserializationFeature, ObjectMapper}
import com.fasterxml.jackson.module.scala.DefaultScalaModule

val alertsData = sc.textFile(rawlines).map(alertStr => {
      // build the mapper inside the closure so it is created on the executor
      val mapper = new ObjectMapper()
      mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
      mapper.registerModule(DefaultScalaModule)
      mapper.readValue(alertStr, classOf[Alert])
    })

If the mapper is defined outside the closure, I get a NullPointerException. I also tried broadcasting it, but that did not work either.
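
A variant of the same idea (again only a sketch, reusing the Alert class and rawlines from the snippet above) builds the mapper once per partition with mapPartitions instead of once per record, so a new ObjectMapper is not constructed for every line:

val alertsData = sc.textFile(rawlines).mapPartitions { lines =>
  // one mapper per partition, still created on the executor side
  val mapper = new ObjectMapper()
  mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
  mapper.registerModule(DefaultScalaModule)
  lines.map(line => mapper.readValue(line, classOf[Alert]))
}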

Also, there is no need to add Jackson as an explicit dependency, since Spark already provides it.
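
In build.sbt terms this means Jackson can either be omitted entirely or, if it is declared at all, pinned to the version Spark ships and marked provided so the assembly does not bundle a second, conflicting copy (a sketch; the 2.4.4 version number is an assumption aligned with Spark 1.4.0's own Jackson):

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.4.0" % "provided",
  // only needed if referenced at compile time; kept out of the fat jar
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.4.4" % "provided"
)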

Hope this helps.

Aliza