I'm trying to set up react-router for my first React webapp, and it seems to be working, except that my nested pages don't load their CSS when I refresh the page.
It only works one level deep: /dashboard is fine, but the CSS doesn't load for /components/timer.
Here's what my index.jsx file looks like:
import './assets/plugins/morris/morris.css';
import './assets/css/bootstrap.min.css';
import './assets/css/core.css';
import './assets/css/components.css';
import './assets/css/icons.css';
import './assets/css/pages.css';
import './assets/css/menu.css';
import './assets/css/responsive.css';
import { render } from 'react-dom';
import { Router, Route, browserHistory } from 'react-router';
// (component imports for Dashboard and WidgetComponent omitted)

render(
  <Router history={browserHistory}>
    <Route path="/" component={Dashboard}/>
    <Route path="/components/:name" component={WidgetComponent}/>
    <Route path="*" component={Dashboard}/>
  </Router>,
  document.getElementById('root')
);
Any idea why?
I'm not sure what's causing this exception in my Spark job after it has been running for a few hours.
I'm running Spark 2.0.2.
Any debugging tips?
2016-12-27 03:11:22,199 [shuffle-server-3] ERROR org.apache.spark.network.server.TransportRequestHandler - Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:154)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:134)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:180)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:109)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEve
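In case it helps with debugging suggestions, here's a sketch of the timeout and heartbeat settings I've been thinking of raising while I investigate (the keys are standard Spark conf settings; the values are guesses, not a known fix):

import org.apache.spark.SparkConf

// Guesswork tuning: if executors are being lost after hours of running,
// a larger network timeout may at least make the failure mode clearer.
val conf = new SparkConf()
  .set("spark.network.timeout", "600s")           // default is 120s
  .set("spark.executor.heartbeatInterval", "60s") // must stay well below spark.network.timeout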
I'm deploying an app in a Docker container that binds a randomly assigned port at startup. The problem is that I want to use docker-compose. Is there a way to expose all of a service's ports with docker-compose? Without docker-compose, I would just use docker run ... -P.
Thanks.
I set up my YARN cluster and my Spark cluster on the same machines, but now I need to run a Spark job on YARN in client mode.
Here's a sample of my job's configuration:
SparkConf sparkConf = new SparkConf(true).setAppName("SparkQueryApp")
        .setMaster("yarn-client") // "yarn-cluster" or "yarn-client"
        .set("es.nodes", "10.0.0.207")
        .set("es.nodes.discovery", "false")
        .set("es.cluster", "wp-es-reporting-prod")
        .set("es.scroll.size", "5000")
        .setJars(JavaSparkContext.jarOfClass(Demo.class))
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.default.parallelism", String.valueOf(cpus * 2))
        .set("spark.executor.memory", "10g")
        .set("spark.num.executors", "40")
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.dynamicAllocation.minExecutors", "10")
        .set("spark.dynamicAllocation.maxExecutors", "50")
        .set("spark.logConf", "true");
This doesn't seem to work when I try to run my Spark job with
java -jar spark-test-job.jar
I get this exception:
405472 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to
server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
sleepTime=1 SECONDS)
406473 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to
server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is …
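The 0.0.0.0:8032 address makes me think the client isn't picking up my ResourceManager location (8032 is YARN's default RM port), so the real fix is probably pointing HADOOP_CONF_DIR at the directory containing my yarn-site.xml. Here's a sketch of the workaround I'm considering in the meantime, with a placeholder hostname:

// Hypothetical workaround: hand the ResourceManager address straight to
// the underlying Hadoop configuration ("rm-host" is a placeholder).
sparkConf.set("spark.hadoop.yarn.resourcemanager.address", "rm-host:8032")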
I'm trying to query my data in Elasticsearch from Apache Spark, but my Spark job has spent about 20 hours on an aggregation and is still running. The same query in ES takes about 6 seconds.

I understand that the data has to be transferred from the Elasticsearch cluster to my Spark cluster, and that some of it gets shuffled in Spark.
The data in my ES index is roughly 300 million documents, each with about 400 fields (1.4 TB).
I have a 3-node Spark cluster (1 master, 2 workers) with a total of 60 GB of memory and 8 cores.
The time it takes to run is unacceptable. Is there any way to make my Spark job run faster?
Here's my Spark configuration:
SparkConf sparkConf = new SparkConf(true).setAppName("SparkQueryApp")
        .setMaster("spark://10.0.0.203:7077")
        .set("es.nodes", "10.0.0.207")
        .set("es.cluster", "wp-es-reporting-prod")
        .setJars(JavaSparkContext.jarOfClass(Demo.class))
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .set("spark.default.parallelism", String.valueOf(cpus * 2))
        .set("spark.executor.memory", "8g");
Edit:
SparkContext sparkCtx = new SparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(sparkCtx);
DataFrame df = JavaEsSparkSQL.esDF(sqlContext, "customer-rpts01-201510/sample");
DataFrame dfCleaned = cleanSchema(sqlContext, df);
dfCleaned.registerTempTable("RPT");
DataFrame sqlDFTest = sqlContext.sql("SELECT agent, count(request_type) FROM RPT group by agent");
for (Row row : sqlDFTest.collect()) {
    System.out.println(">> " + row);
}
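One thing I've been looking at is whether I can avoid pulling all ~400 fields over the wire. As far as I can tell from the elasticsearch-hadoop docs, the connector can push filters down to ES and restrict which fields it reads (the option names below are from those docs; whether this helps a GROUP BY like mine is the part I'm unsure about). A Scala sketch:

val df = sqlContext.read
  .format("org.elasticsearch.spark.sql")
  .option("pushdown", "true")                            // push Spark SQL filters into the ES query
  .option("es.read.field.include", "agent,request_type") // fetch only the fields the query touches
  .load("customer-rpts01-201510/sample")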
I'm running a Spark job on a standalone cluster, and I've noticed that after it has been running for a long time, the GC time starts showing up in that dreaded red color.
Here are the available resources:
Cores in use: 80 Total, 76 Used
Memory in use: 312.8 GB Total, 292.0 GB Used
Job details:
spark-submit --class com.mavencode.spark.MonthlyReports \
  --master spark://192.168.12.14:7077 \
  --deploy-mode cluster --supervise \
  --executor-memory 16G --executor-cores 4 \
  --num-executors 18 --driver-cores 8 \
  --driver-memory 20G montly-reports-assembly-1.0.jar
How can I fix these long GC times?
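Would switching the executors to G1 help? Here's a sketch of the JVM options I'm considering (standard HotSpot flags; the values are guesses, not something I've measured). I'd set it via SparkConf, though the same string could go on spark-submit with --conf:

// Guesswork: try the G1 collector on executors and log GC activity
// so the long pauses can actually be diagnosed.
val conf = new SparkConf()
  .set("spark.executor.extraJavaOptions",
    "-XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps")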
I'm exploring DSC and wondering: what's the best way to copy DSC resources to the target hosts?
When I try to push a configuration to a target host, it complains about the missing DSC resource:
The PowerShell DSC resource xWebAdministration does not exist at the PowerShell module path nor is it registered as a WMI DSC resource.
+ CategoryInfo : InvalidOperation: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : DscResourceNotFound
+ PSComputerName : server1.appman.net
I'm writing data (about 83M records) from a DataFrame to PostgreSQL, and it's a bit slow: it takes 2.7 hours to complete the write to the db.
Looking at the executors, there is only one executor running, with only one active task. Is there any way to parallelize the write to the db using all the executors in Spark?
...
val prop = new Properties()
prop.setProperty("user", DB_USER)
prop.setProperty("password", DB_PASSWORD)
prop.setProperty("driver", "org.postgresql.Driver")
salesReportsDf.write
  .mode(SaveMode.Append)
  .jdbc(s"jdbc:postgresql://$DB_HOST:$DB_PORT/$DATABASE", REPORTS_TABLE, prop)
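For what it's worth, here's a sketch of what I was about to try next: since each partition is written over its own JDBC connection, repartitioning before the write should fan the inserts out across executors (the partition count here is a guess):

// Assumption: the single active task suggests the DataFrame has only one
// partition; each partition gets its own JDBC connection during the write.
salesReportsDf
  .repartition(16) // guessed value; roughly the total executor core count
  .write
  .mode(SaveMode.Append)
  .jdbc(s"jdbc:postgresql://$DB_HOST:$DB_PORT/$DATABASE", REPORTS_TABLE, prop)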
Thanks.
I'm running a local web server in my iOS app that serves an AngularJS application, but I can't figure out how to communicate between the WKWebView and my controller.
On the iOS side, in viewDidLoad:
WKWebViewConfiguration *wkWebConfig = [[WKWebViewConfiguration alloc] init];
[wkWebConfig.userContentController addScriptMessageHandler:self name:@"interOp"];
self.webview = [[WKWebView alloc] initWithFrame:self.view.frame];
[self.webview setBackgroundColor:self.view.backgroundColor];
[self.webview loadRequest:request];
From the AngularJS controller I need to get hold of the wkwebview object, but it's undefined:
$scope.showSettings = function(){
    // this is undefined
    window.webkit.messageHandlers.interOp.postMessage(message)
}
If I try it in the head tag of my web page, it works perfectly fine:
<script>
window.webkit.messageHandlers.interOp.postMessage(message)
</script>
What exactly am I doing wrong?
I'm wondering: is there a size limit on Spark executor memory?
Consider the case of running a badly written job that does collect, union, count, and so on.
Just for context, suppose I have these resources (2 machines):
Cores: 40 cores, Total = 80 cores
Memory: 156G, Total = 312G
Any recommendations on larger vs. smaller executors?
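To make the question concrete, here's the back-of-the-envelope sizing I've been doing, based on the commonly quoted ~5-cores-per-executor rule of thumb (the rule itself is an assumption I haven't validated):

// Rule-of-thumb sizing for 2 machines, 80 cores, 312G total memory.
val totalCores = 80
val coresPerExecutor = 5                             // often quoted for good I/O throughput
val numExecutors = totalCores / coresPerExecutor - 1 // leave headroom for the driver
val totalMemGb = 312
val memPerExecutorGb = (totalMemGb / numExecutors * 0.9).toInt // ~10% off for overhead
println(s"$numExecutors executors x $coresPerExecutor cores, ${memPerExecutorGb}g each")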
In Elasticsearch, one of my fields is a date, and I defined the mapping with a custom date format that matches my dates.
However, in some cases the value of the date field is just an empty string, "LastUpdateDate": "", and that causes an exception. How do I handle empty strings in the date field mapping?
Unexpected error: (<class 'elasticsearch.helpers.BulkIndexError'>, BulkIndexError(u'1 document(s) failed to index.', [{u'create':
{u'status': 400, u'_type': u'songs', u'_id': u'AVNtiXgTC4kaHLfuKAJA', u'error': {u'caused_by': {u'reason': u'Invalid format: ""',
u'type': u'illegal_argument_exception'}, u'reason': u'failed to parse [LastUpdateDate]', u'type': u'mapper_parsing_exception'},
u'_index': u'album-032016'}}]), <traceback object at 0x7fba4395c1b8>)
This seems like a bug in Scala: it allows you to change a method variable's name.
In this example, the compiler shouldn't allow the parameter name to be declared again within the same method block.
object App {
  def main(args: Array[String]): Unit = {
    testMethod()
  }

  def testMethod(name: String = "John Smith"): Unit = {
    val name = "John Doe"
    println(name)
  }
}
Is there any explanation for this mutation of the method variable name?
Output:
John Doe
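For comparison, here's a minimal sketch I wrote (my own example, not from the code above) showing the same behavior with an explicit nested scope; as far as I can tell, nothing is mutated — the inner val simply shadows the outer binding:

object ShadowDemo {
  def main(args: Array[String]): Unit = {
    val name = "John Smith"
    locally {
      val name = "John Doe" // a new binding that shadows the outer `name`
      println(name)         // prints John Doe
    }
    println(name)           // prints John Smith
  }
}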