Posts by Ale*_*nin

Why doesn't SimpleDateFormat throw an exception for an invalid format?

import java.text.ParseException;

public class Hello {

    public static void main(String[] args) throws ParseException {
        System.out.println(new java.text.SimpleDateFormat("yyyy-MM-dd").parse("23-06-2015"));
    }
}

Why does this return Sun Dec 05 00:00:00 GMT 28? I was expecting an exception.
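For background: SimpleDateFormat is lenient by default, so the pattern yyyy-MM-dd reads "23" as the year, "06" as the month, and "2015" as the day of month, then rolls the overflow forward; June of year 23 plus roughly 2015 days lands on December 5 of year 28. A minimal sketch of strict parsing with the standard java.text API, which does throw for this input:

import java.text.ParseException;
import java.text.SimpleDateFormat;

public class StrictParse {

    public static void main(String[] args) {
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
        // Disable leniency so out-of-range field values are rejected
        format.setLenient(false);
        try {
            System.out.println(format.parse("23-06-2015"));
        } catch (ParseException e) {
            // With leniency off, day-of-month 2015 can no longer roll over
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}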

java parsing simpledateformat

9 votes · 1 answer · 1527 views

Custom state store provider for Apache Spark on Mesos

I wrote a custom state store and state store provider for Apache Spark 2.3.0 and tried to deploy the job with the following additional parameter:

--conf spark.sql.streaming.stateStore.providerClass=com.sample.state.CustomStateStoreProvider

I use Marathon and Mesos to run the Spark job, and it fails at startup with this exception:

java.lang.ClassNotFoundException: com.sample.state.CustomStateStoreProvider 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:235)
    at org.apache.spark.sql.execution.streaming.state.StateStoreProvider$.create(StateStore.scala:213)
    at org.apache.spark.sql.execution.streaming.StateStoreWriter$class.stateStoreCustomMetrics(statefulOperators.scala:121)
    at org.apache.spark.sql.execution.streaming.StateStoreWriter$class.metrics(statefulOperators.scala:86)
    at org.apache.spark.sql.execution.streaming.StateStoreSaveExec.metrics$lzycompute(statefulOperators.scala:251)
    at org.apache.spark.sql.execution.streaming.StateStoreSaveExec.metrics(statefulOperators.scala:251)
    at org.apache.spark.sql.execution.SparkPlanInfo$.fromSparkPlan(SparkPlanInfo.scala:58)
    at org.apache.spark.sql.execution.SparkPlanInfo$$anonfun$fromSparkPlan$1.apply(SparkPlanInfo.scala:62)
    at org.apache.spark.sql.execution.SparkPlanInfo$$anonfun$fromSparkPlan$1.apply(SparkPlanInfo.scala:62)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:285)
    at org.apache.spark.sql.execution.SparkPlanInfo$.fromSparkPlan(SparkPlanInfo.scala:62)
    at org.apache.spark.sql.execution.SparkPlanInfo$$anonfun$fromSparkPlan$1.apply(SparkPlanInfo.scala:62)
    at org.apache.spark.sql.execution.SparkPlanInfo$$anonfun$fromSparkPlan$1.apply(SparkPlanInfo.scala:62)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:285)
    at org.apache.spark.sql.execution.SparkPlanInfo$.fromSparkPlan(SparkPlanInfo.scala:62)
    at org.apache.spark.sql.execution.SparkPlanInfo$$anonfun$fromSparkPlan$1.apply(SparkPlanInfo.scala:62)
    at org.apache.spark.sql.execution.SparkPlanInfo$$anonfun$fromSparkPlan$1.apply(SparkPlanInfo.scala:62)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:285)
    at …
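A ClassNotFoundException this early in startup usually means the jar containing com.sample.state.CustomStateStoreProvider never reached the driver and executor classpaths. As a hedged sketch (the jar path is hypothetical), the provider's jar can be shipped alongside the application with one more spark-submit parameter:

--jars /path/to/custom-state-store.jar

Alternatively, bundling the provider class into the application's uber-jar avoids the extra parameter entirely.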

mesos apache-spark spark-structured-streaming

5 votes · 1 answer · 334 views

How to exclude the embedded Tomcat from a Spring Boot application

How can we exclude the embedded Tomcat server from a Spring Boot application so that the jar can be run on a JBoss server?
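As a hedged sketch of the usual approach, assuming Spring Boot 2.x: the spring-boot-starter-tomcat dependency is excluded or marked provided in the build, packaging is switched to war, and the main class extends SpringBootServletInitializer so an external container such as JBoss can bootstrap the application (the class name Application here is hypothetical):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

@SpringBootApplication
public class Application extends SpringBootServletInitializer {

    // Entry point used by the external servlet container (e.g. JBoss)
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(Application.class);
    }

    // Entry point kept so the application can still run standalone
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

In Spring Boot 1.x the initializer lives at org.springframework.boot.web.support.SpringBootServletInitializer instead.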

spring-boot

4 votes · 2 answers · 10k views

Dropping multiple columns from a Spark dataframe by iterating over a Scala list of column names

I have a dataframe with roughly 400 columns, and I need to drop 100 of them. So I built a Scala list holding the 100 column names, and I want to iterate over it in a for loop, dropping one column per iteration.

Here is the code:

final val dropList: List[String] = List("Col1", "Col2", /* … */ "Col100")

def drpColsfunc(inputDF: DataFrame): DataFrame = {
    // Accumulate the drops in a var: a val declared inside the loop body
    // is a fresh, single-drop result that goes out of scope each iteration.
    var returnDF = inputDF
    for (i <- 0 until dropList.length) {
        returnDF = returnDF.drop(dropList(i))
    }
    returnDF // note: since Spark 2.0, inputDF.drop(dropList: _*) does this in one call
}

val test_df = drpColsfunc(input_dataframe) 

test_df.show(5)

scala apache-spark apache-spark-sql

3 votes · 3 answers · 20k views

How to get the stats of a Zookeeper node using the Curator framework

I am using the Curator framework in Java to interact with ZNodes. How can I get node stats such as the last-modified time and the creation time? I can do the same thing in Python with the Kazoo framework:

from kazoo.client import KazooClient

zk_client = KazooClient(hosts='127.0.0.1:2181')
zk_client.start()
data, stat = zk_client.get("/my/favorite")

Reference link: kazoo

I searched for similar support in Curator but could not find anything. Any help is appreciated. Thanks.
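For reference, a minimal sketch of the Curator equivalent: getData() can populate an org.apache.zookeeper.data.Stat via storingStatIn(), and the Stat carries the creation and last-modified timestamps. The connection string and path mirror the Kazoo snippet; the retry settings are arbitrary.

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.data.Stat;

public class ZnodeStatExample {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "127.0.0.1:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // storingStatIn() fills the supplied Stat with the node's metadata
        Stat stat = new Stat();
        byte[] data = client.getData().storingStatIn(stat).forPath("/my/favorite");

        System.out.println("created:       " + stat.getCtime());  // creation time, epoch ms
        System.out.println("last modified: " + stat.getMtime());  // last-modified time, epoch ms

        // checkExists().forPath(...) also returns a Stat, without fetching the data
        client.close();
    }
}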

java apache-zookeeper apache-curator

2 votes · 1 answer · 1060 views