I am new to Node.js. I wrote a Node.js program and scheduled it to run every minute with node-schedule. But after it runs for a while and produces a few logs on the console, Node.js throws the error this.job.execute is not a function.
Here is the code I am using:
var nodeSchedule = require('node-schedule');

runJob();

function runJob() {
    console.log("start");
    nodeSchedule.scheduleJob('0 * * * * *', require('./prodModules.js'));
}
The log I get is:
C:\Users\1060641\Downloads\NodeJS HealthReport\Collector>node src\main\nodejs\collector_main.js
start
Connected
Ready
logged in as Super User
nfs_check running...
NFS Check completed
snapchart_check running...
C:\Users\1060641\node_modules\node-schedule\lib\schedule.js:177
this.job.execute();
^
TypeError: this.job.execute is not a function
at Job.invoke (C:\Users\1060641\node_modules\node-schedule\lib\schedule.js:177:14)
at null._onTimeout (C:\Users\1060641\node_modules\node-schedule\lib\schedule.js:445:11)
at Timer.listOnTimeout (timers.js:92:15)
C:\Users\1060641\Downloads\NodeJS HealthReport\Collector>
I don't think the problem is in prodModules.js, because it works fine when run on its own. It is the scheduling that throws the error.
Please help.
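For reference, node-schedule's scheduleJob expects its second argument to be a function (or an object exposing an execute() method). A minimal sketch of that shape, where the body of prodModules.js is hypothetical:

// prodModules.js -- hypothetical: export one function for the scheduler to call
module.exports = function () {
    // ... the health-check work would go here ...
    console.log('job ran');
};

// collector_main.js
var nodeSchedule = require('node-schedule');
// If require('./prodModules.js') returns anything other than a function
// (or an object with an execute() method), node-schedule stores it as
// this.job and later fails with "this.job.execute is not a function".
nodeSchedule.scheduleJob('0 * * * * *', require('./prodModules.js'));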
I am integrating Kafka and Spark, using spark-streaming. I created a topic as the Kafka producer:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
I am publishing messages to Kafka and trying to read them with spark-streaming Java code and display them on the screen.
All the daemons are up and running: Spark master and worker, Zookeeper, Kafka.
I am writing Java code using KafkaUtils.createStream.
The code is as follows:
public class SparkStream {
    public static void main(String args[])
    {
        if(args.length != 3)
        {
            System.out.println("SparkStream <zookeeper_ip> <group_nm> <topic1,topic2,...>");
            System.exit(1);
        }

        Map<String,Integer> topicMap = new HashMap<String,Integer>();
        String[] topic = args[2].split(",");
        for(String t: topic)
        {
            topicMap.put(t, new Integer(1));
        }

        JavaStreamingContext jssc = new JavaStreamingContext("spark://192.168.88.130:7077", "SparkStream", new Duration(3000));
        JavaPairReceiverInputDStream<String, String> messages = KafkaUtils.createStream(jssc, args[0], args[1], topicMap);
        System.out.println("Connection done++++++++++++++");
        JavaDStream<String> data = messages.map(new Function<Tuple2<String, String>, …
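The code above is cut off; a hypothetical completion of the truncated map call, assuming the intent is to pull the payload out of each (key, value) pair and print it (this completion is not the original code from the question):

        // Hypothetical completion: extract the Kafka message payload
        // from each (key, value) pair and show it on screen.
        JavaDStream<String> data = messages.map(new Function<Tuple2<String, String>, String>() {
            public String call(Tuple2<String, String> message) {
                return message._2(); // the message value
            }
        });
        data.print();
        jssc.start();
        jssc.awaitTermination();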
I am reading Kafka streaming messages using spark-streaming. Now I want to set up Cassandra as the output. I created a table "test_table" in Cassandra, with columns "key: text primary key" and "value: text". I have successfully mapped the data to JavaDStream<Tuple2<String,String>> data like this:

JavaSparkContext sc = new JavaSparkContext("local[4]", "SparkStream", conf);
JavaStreamingContext jssc = new JavaStreamingContext(sc, new Duration(3000));
JavaPairReceiverInputDStream<String, String> messages = KafkaUtils.createStream(jssc, args[0], args[1], topicMap);

JavaDStream<Tuple2<String,String>> data = messages.map(new Function<Tuple2<String,String>, Tuple2<String,String>>()
{
    public Tuple2<String,String> call(Tuple2<String, String> message)
    {
        return new Tuple2<String,String>(message._1(), message._2());
    }
});
Then I created a List:
List<TestTable> list = new ArrayList<TestTable>();
where TestTable is my custom class with the same structure as my Cassandra table, with members "key" and "val":
class TestTable
{
    String key;
    String val;

    public TestTable() {}

    public TestTable(String k, String v)
    {
        key = k;
        val = v;
    }
    public String …
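The question is truncated above; for the "set Cassandra as the output" part, a minimal sketch using the DataStax spark-cassandra-connector Java API. The keyspace name "test_keyspace" is an assumption, and TestTable is assumed to expose JavaBean getters/setters:

import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;
import com.datastax.spark.connector.japi.CassandraStreamingJavaUtil;

// Turn each (key, value) pair into a TestTable bean...
JavaDStream<TestTable> rows = data.map(new Function<Tuple2<String, String>, TestTable>() {
    public TestTable call(Tuple2<String, String> t) {
        return new TestTable(t._1(), t._2());
    }
});

// ...and stream the beans into the Cassandra table directly,
// instead of collecting them into a List first.
CassandraStreamingJavaUtil.javaFunctions(rows)
    .writerBuilder("test_keyspace", "test_table", mapToRow(TestTable.class))
    .saveToCassandra();

If the bean field names differ from the column names (val vs. value here), mapToRow has an overload that takes a field-to-column name map to bridge them.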
When I run Pig in mapreduce mode, I get a ConnectionRefused error.

Details:
I installed Pig from the tarball (pig-0.14) and exported the classpath in bashrc.
All the Hadoop (hadoop-2.5) daemons are up and running (confirmed by jps).
[root@localhost sbin]# jps
2272 Jps
2130 DataNode
2022 NameNode
2073 SecondaryNameNode
2238 NodeManager
2190 ResourceManager
I run Pig in mapreduce mode:
[root@localhost sbin]# pig
grunt> file = LOAD '/input/pig_input.csv' USING PigStorage(',') AS (col1,col2,col3);
grunt> dump file;
Then I get the error:
java.io.IOException: java.net.ConnectException: Call From localhost.localdomain/127.0.0.1 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:334)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:419)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:532)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:183)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getTaskReports(HadoopShims.java:231)
at org.apache.pig.tools.pigstats.mapreduce.MRJobStats.addMapReduceStatistics(MRJobStats.java:352)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.addSuccessJobStats(MRPigStatsUtil.java:233)
at org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil.accumulateStats(MRPigStatsUtil.java:165)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:360)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
at …
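For context, 0.0.0.0:10020 in the trace is the default address of the MapReduce JobHistory Server, which does not appear in the jps listing above. A commonly suggested check, sketched under the assumption of a stock hadoop-2.5 tarball layout ($HADOOP_HOME is hypothetical):

# Start the JobHistory Server, then re-run the Pig script
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
jps    # should now also list JobHistoryServer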
I have a file that contains text with data separated by "^":

some text^goes here^
and few more^goes
here
I am writing a custom input format to split records on the "^" character instead of newlines. That is, the output of the mapper should be:
some text
goes here
and few more
goes here
I wrote a custom input format that extends FileInputFormat, and a custom record reader that extends RecordReader. The code for my custom record reader is below. I don't know how to proceed with it; I am stuck on the nextKeyValue() method, in the WHILE loop part. How should I read the data from the split and generate my custom key/value pairs? I am using the new mapreduce packages, not the old mapred package.
public class MyRecordReader extends RecordReader<LongWritable, Text>
{
    long start, current, end;
    Text value;
    LongWritable key;
    LineReader reader;
    FileSplit split;
    Path path;
    FileSystem fs;
    FSDataInputStream in;
    Configuration conf;

    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext cont) throws IOException, InterruptedException
    {
        conf = cont.getConfiguration();
        split = (FileSplit) inputSplit;
        path = split.getPath();
        fs = path.getFileSystem(conf);
        in = fs.open(path);
        reader = new LineReader(in, conf);
        start = split.getStart();
        current = start;
        end = split.getLength() + start;
    }
} …
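Since the code stops right where the question does, here is a minimal sketch of one common way to write nextKeyValue() for a custom delimiter. It assumes the reader is built with LineReader's delimiter-aware constructor, i.e. reader = new LineReader(in, conf, "^".getBytes()), and it ignores the subtlety of records spanning split boundaries:

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException
    {
        if (current >= end) {
            return false;                        // this split is exhausted
        }
        if (key == null)   key = new LongWritable();
        if (value == null) value = new Text();

        key.set(current);                        // key = byte offset of the record
        int bytesRead = reader.readLine(value);  // reads up to the next '^'
        if (bytesRead == 0) {
            return false;                        // end of stream
        }
        current += bytesRead;
        return true;
    }

A simpler alternative worth noting: on Hadoop 2.x the stock TextInputFormat already honors a custom delimiter via the textinputformat.record.delimiter configuration key, with no custom reader needed.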
I am trying to write data to MongoDB using Node.js. While writing the data, I get the following error at the last line. The execution log is:

{ _id: 56e90c1292e69900190954f5,
nfs: [ 'ebdp1', 'ebdp2', 'ebdp3', 'ebdp4' ],
snapShotTime: '2016-03-16 07:32:34' }
{ [MongoError: topology was destroyed] name: 'MongoError', message: 'topology was destroyed' }
My code is structured following the framework: the Collection's schema is in the file appTableProdSchema.js and the object data is in appTableProdData.js. The main file is newMain.js.
The code is as follows (newMain.js):
var mongoose = require('mongoose');
var moment = require('moment');
var nfs_check="";
var promises = [];
var nodes = ["ebdp1","ebdp2", "ebdp3", "ebdp4"];
mongoose.connect('mongodb://localhost:27017/test');
var db = mongoose.connection;
var storageData = require('./appTableProdData.js');
var storageDataSchema = require('./appTableProdSchema.js');
var obj = {};
obj.snapShotTime = moment().utc().format("YYYY-MM-DD HH:mm:ss");
obj.nfs = nodes;
db.once('open', function() { …
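The snippet is cut off above; for what it's worth, MongoError: topology was destroyed typically means the connection was closed (or the server went away) while an operation was still in flight. A minimal sketch of the safe ordering, assuming appTableProdData.js exports a Mongoose model (the model usage here is hypothetical):

// Hypothetical sketch: keep the connection open until the write's
// callback has fired, then disconnect.
db.once('open', function () {
    var row = new storageData(obj);          // model from appTableProdData.js
    row.save(function (err, saved) {
        if (err) {
            console.error(err);
        } else {
            console.log('saved', saved._id);
        }
        mongoose.disconnect();               // close only after the save completes
    });
});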
I am using AngularJS and trying to add Bootstrap collapsible panels in a loop.

I have written the code below, but the body of every panel shows up under the first panel's heading.
I need each body to show under its corresponding panel heading.
I believe this is happening because the loop repeats the same <div id="myButton">, so whenever a click happens, every <a href="#myButton"> generated by the loop targets the first panel.
Is there a way to set the ID from a variable value? Something like <a href="#{{variable}}"> and <div id="{{variable}}">?
The code I wrote:
<div ng-repeat="ownr in meldHealth.ownerList | orderBy: 'ownr'">
    <div class="panel-group">
        <div class="panel panel-default">
            <div class="panel-heading">
                <h4 class="panel-title">
                    <a data-toggle="collapse" href="#myButton" ng-click="getApps(ownr)">{{ownr}}</a>
                </h4>
            </div>
            <div id="myButton" class="panel-collapse collapse">
                <div class="panel-body">
                    <div class="col-xs-12">
                        <span ng-repeat="word in ownerApps | orderBy: 'word'">
                            <button class="btn btn-default text-blue" data-toggle="modal" href="#myModal" type="button"
                                ng-click="getDetails(word.name, currOwner)" style="width:250px;">
                                {{word.name}}
                            </button>
                        </span>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
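Interpolated IDs of the kind suggested above do generally work inside ng-repeat; a minimal sketch using $index to make each collapse target unique (hypothetical, adapted from the markup above):

<div ng-repeat="ownr in meldHealth.ownerList | orderBy: 'ownr'">
    <!-- $index is unique per iteration, so each anchor/panel pair
         gets its own id and the collapse targets no longer collide -->
    <a data-toggle="collapse" href="#panel{{$index}}" ng-click="getApps(ownr)">{{ownr}}</a>
    <div id="panel{{$index}}" class="panel-collapse collapse">
        ...
    </div>
</div>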
javascript ×3
node.js ×2
angularjs ×1
apache-kafka ×1
apache-pig ×1
apache-spark ×1
cassandra ×1
express ×1
hadoop2 ×1
html ×1
jquery ×1
mapreduce ×1
mongodb ×1
npm ×1