I have a bucket/folder into which many files arrive every few minutes. How can I read only the new files, based on the file timestamps?

For example: list all files with timestamp > my_timestamp.
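A minimal sketch of one way to do this with the google-cloud-storage Python client rather than gsutil, assuming placeholder bucket and prefix names; each blob's time_created field carries its upload timestamp:

from datetime import datetime, timezone
from google.cloud import storage

my_timestamp = datetime(2021, 1, 1, tzinfo=timezone.utc)  # hypothetical cutoff

client = storage.Client()
# time_created is set by GCS at upload time, so filtering on it
# keeps only files that arrived after the cutoff
for blob in client.list_blobs("my-bucket", prefix="my-folder/"):
    if blob.time_created > my_timestamp:
        print(blob.name, blob.time_created)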
I accidentally overwrote the entry in .ssh/authorized_keys, and now I can no longer connect to my EC2 instance with my .pem file. I tried generating a new .pem file, hoping that process would add an entry to .ssh/authorized_keys, but it did not. I tried reading the documentation, but it is a bit confusing to me. A simplified explanation of what to do here would be greatly appreciated.
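For context, a minimal sketch of the step the console never performs for you, assuming a hypothetical my-new-key.pem and some remaining way to reach the instance's filesystem (for example, the root volume attached to a rescue instance): generating a key pair never touches the instance, so the public half must be appended to authorized_keys by hand.

# Derive the public key from the new private key and append it
ssh-keygen -y -f my-new-key.pem >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys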
I am trying to create the "Hello Node" sample application in Google Container Engine by following the tutorial. However, even after running the command kubectl expose rc hello-node --type="LoadBalancer", no external IP is exposed for accessing the port.
vagrant@docker-host:~/node-app$ kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node:v1 --port=8080
replicationcontroller "hello-node" created
vagrant@docker-host:~/node-app$ kubectl expose rc hello-node --type="LoadBalancer"
service "hello-node" exposed
vagrant@docker-host:~/node-app$ kubectl get services hello-node
NAME         CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR         AGE
hello-node   10.163.248.xxx                 8080/TCP   run=hello-node   14s
vagrant@docker-host:~/node-app$ kubectl get services hello-node
NAME         CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR         AGE
hello-node   10.163.248.xxx                 8080/TCP   run=hello-node   23s
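One hedged observation: on Google Container Engine the load balancer is provisioned asynchronously, so EXTERNAL_IP can stay blank for a minute or more after the expose command. A sketch of how to wait and inspect, using only standard kubectl subcommands:

# Watch the service until EXTERNAL_IP is populated
kubectl get services hello-node --watch

# If it never fills in, inspect the LoadBalancer Ingress section and events
kubectl describe service hello-node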
I am running another user's ansible-playbook template, which has a task that creates a new user:
---
- name: Create the application user
  user: name={{ gunicorn_user }} state=present

- name: Create the application group
  group: name={{ gunicorn_group }} system=yes state=present

- name: Add the application user to the application group
  user: name={{ gunicorn_user }} group={{ gunicorn_group }} state=present
No password is set for this user here. The new user is created on the system after the playbook runs, but when I try to log in as the newly created user, it asks for a password. Basically, I want to understand how/why a password is being requested instead of letting me log in with the new account, since I never specified one when creating the user.

I checked /etc/passwd, which shows:

youtubeadl:x:1003:999::/home/youtubeadl:

where youtubeadl is the newly created user.
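A plausible explanation, offered as an assumption: the user module with no password argument creates the account with a locked password field, so interactive login always prompts and can never succeed. A minimal sketch of the usual key-based alternative (the public-key path is hypothetical):

# Without `password`, the account's password stays locked; install a key instead
- name: Install an SSH public key for the application user
  authorized_key:
    user: "{{ gunicorn_user }}"
    key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"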
If I apply a flatMap from JSONArray to JSONObject, I get an error. When I run it locally from Eclipse (on my laptop) it works fine, but when it runs on the cluster (YARN) it produces a strange error. The Spark version is 2.0.0.

Code:
JavaRDD<JSONObject> rdd7 = rdd6.flatMap(new FlatMapFunction<JSONArray, JSONObject>() {
    @Override
    public Iterable<JSONObject> call(JSONArray array) throws Exception {
        List<JSONObject> list = new ArrayList<JSONObject>();
        for (int i = 0; i < array.length(); list.add(array.getJSONObject(i++)));
        return list;
    }
});
Error log:
java.lang.AbstractMethodError: com.pwc.spark.tifcretrolookup.TIFCRetroJob$2.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:30)
at com.pwc.spark.ElasticsearchClientLib.CommonESClient.index(CommonESClient.java:33)
at com.pwc.spark.ElasticsearchClientLib.ESClient.call(ESClient.java:34)
at com.pwc.spark.ElasticsearchClientLib.ESClient.call(ESClient.java:15)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:218)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at …
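A hedged diagnosis: in Spark 2.0 the Java FlatMapFunction.call() signature changed to return java.util.Iterator instead of Iterable, and an AbstractMethodError like the one above typically means the jar was compiled against the 1.x API but executed on a 2.0 cluster. A sketch of the same flatMap in the 2.0-compatible form:

import java.util.Iterator;

// Spark 2.0: call() must return an Iterator rather than an Iterable
JavaRDD<JSONObject> rdd7 = rdd6.flatMap(new FlatMapFunction<JSONArray, JSONObject>() {
    @Override
    public Iterator<JSONObject> call(JSONArray array) throws Exception {
        List<JSONObject> list = new ArrayList<JSONObject>();
        for (int i = 0; i < array.length(); i++) {
            list.add(array.getJSONObject(i));
        }
        return list.iterator();
    }
});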
I am just learning Ansible, using the playbook examples that Ansible provides: https://github.com/ansible/ansible-examples/tree/master/lamp_simple

When I try to put a debug message at the beginning of the playbook, I get the following error:
vagrant@packer-debian-7:~/ansible-examples-master/lamp_simple$ ansible-playbook -i hosts site.yml --private-key=~/.ssh/google_compute_engine -vvvv
ERROR: debug is not a legal parameter at this level in an Ansible Playbook
[site.yml]
---
# This playbook deploys the whole application stack in this site.

- debug: msg="Start KickAsssss"

- name: apply common configuration to all nodes
  hosts: all
  roles:
    - common

- name: configure and deploy the webservers and application code
  hosts: webservers
  roles:
    - web

- name: deploy MySQL and configure the databases
  hosts: dbservers
  roles: …
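One reading of that error, offered as an assumption about this Ansible version: debug is a module, so it may only appear inside a play's tasks list, and the top level of site.yml may only contain plays. A minimal sketch of the same message in a legal position:

---
# Only plays are legal at the top level; wrap the debug task in one
- name: print a startup message
  hosts: all
  tasks:
    - debug: msg="Start KickAsssss"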
I am trying to automate VM creation in GCE. I want to SSH into my VM and execute commands directly from the gcloud console.

gcloud compute ssh <instance> opens a new ssh window; instead, I want to execute shell commands inside the VM directly from the gcloud console, without being redirected to a new ssh window.

Thanks in advance.
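A minimal sketch using gcloud's --command flag, which runs a command over SSH and returns to the local shell instead of opening an interactive session (the instance name and zone here are placeholders):

# Runs `uptime` on the VM and prints the output locally
gcloud compute ssh my-instance --zone=us-central1-a --command="uptime"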
I have a Scala val function as follows:
val getTimestampEpochMillis = (year: Int, month: Int, day: Int, hour: Int, quarterHour: Int, minute: Int, seconds: Int) => {
  var tMinutes = minute
  var tSeconds = seconds
  if (minute == 0) {
    if (quarterHour == 1) {
      tMinutes = 22
      tSeconds = 30
    } else if (quarterHour == 2) {
      tMinutes = 37
      tSeconds = 30
    } else if (quarterHour == 3) {
      tMinutes = 52
      tSeconds = 30
    } else if (quarterHour == 0) {
      tMinutes = 7
      tSeconds = 30
    }
  }
  val localDateTime = LocalDateTime.of(year, month, day, hour, …
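As an aside, the if/else ladder above is a fixed quarterHour-to-(minutes, seconds) table; a sketch of the same mapping as a pattern match, assuming values of quarterHour outside 0-3 should leave the inputs unchanged:

// Same table as the ladder above, stated once as a match
val (tMinutes, tSeconds) =
  if (minute == 0) quarterHour match {
    case 0 => (7, 30)
    case 1 => (22, 30)
    case 2 => (37, 30)
    case 3 => (52, 30)
    case _ => (minute, seconds)
  }
  else (minute, seconds)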
I am using Titan 0.4 + Cassandra. My use case requires inserting multiple vertices at a time (the batch size is approximately 100 vertices per request). For example:

v01 = g.addVertex(["UC":"B","i":2]); v02 = g.addVertex(["UC":"H","i":1])
v03 = g.addVertex(["LC":"a"]); v04 = g.addVertex(["LC":"a"]);
v05 = g.addVertex(["LC":"d"]); v06 = g.addVertex(["LC":"h"]);
v07 = g.addVertex(["LC":"i"]); v08 = g.addVertex(["LC":"p"]);
Is there any gremlin command to add all eight vertices in a single request (something like g.addVertices())?
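To my knowledge there is no g.addVertices() in this API, but TitanGraph is transactional, so every addVertex() issued in the same transaction is persisted together when it commits; a sketch under that assumption:

// All vertices created here belong to a single transaction...
batch = [["UC":"B","i":2], ["UC":"H","i":1], ["LC":"a"], ["LC":"d"]]
vertices = batch.collect { props -> g.addVertex(props) }
// ...and are flushed to Cassandra in one commit
g.commit()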