os.stat returns st_mtime and st_ctime attributes. The modification time is st_mtime, while st_ctime is the "change time" on POSIX. Is there any function that, using Python under Linux, returns a file's creation time?
I am familiar with Hadoop; I installed it following Michael Noll's tutorial (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/).
When I run the command /usr/local/hadoop/bin/start-all.sh, it normally starts the Namenode, Datanode, Jobtracker and Tasktracker on the machine.
But the only daemon that actually comes up is the TaskTracker. Here is the trace:
hduser@srv591 ~ $ /usr/local/hadoop/bin/start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-namenode-srv591.out
localhost: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-datanode-srv591.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-srv591.out
localhost: Exception in thread "main" org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /app/hadoop/tmp/dfs/namesecondary is in an inconsistent state: checkpoint directory does not exist or is not accessible.
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.recoverCreate(SecondaryNameNode.java:729)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:208)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:150)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:676)
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-jobtracker-srv591.out
localhost: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hduser-tasktracker-srv591.out
hduser@srv591 ~ $ /usr/local/java/bin/jps
19469 TaskTracker
19544 Jps
Tariq's solution works, but the jobtracker and namenode still need to be started. Here is the content of the logs …
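One likely cause, judging from the stack trace above (an assumption, not stated in the original post): the SecondaryNameNode's checkpoint directory under hadoop.tmp.dir is missing or unreadable. A minimal sketch of the fix, where `HADOOP_TMP` stands for the hadoop.tmp.dir value from core-site.xml (the tutorial above uses /app/hadoop/tmp; the `mktemp` default here is only a runnable placeholder):

```shell
# Recreate the missing checkpoint directory named in the exception.
HADOOP_TMP="${HADOOP_TMP:-$(mktemp -d)}"   # set to /app/hadoop/tmp on the real node
mkdir -p "$HADOOP_TMP/dfs/namesecondary"
# On the real node, also make sure the hadoop user owns the tree, e.g.:
#   sudo chown -R hduser:hadoop /app/hadoop/tmp
```

After recreating the directory and fixing ownership, rerun start-all.sh and check with jps that NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker are all listed.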
Well, I am looking for a function that collapses runs of multiple whitespace characters in a string down to a single ' '.
For example, given the string s:
s = "hello          world    !"
the function must return "hello world !".
In Python we can do this simply with a regexp:
re.sub(r"\s+", " ", s)
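Put together as a self-contained example (the function name `squeeze_spaces` is just an illustration):

```python
import re

def squeeze_spaces(s):
    # \s+ matches one or more whitespace characters (spaces, tabs,
    # newlines); each run is replaced by a single space.
    return re.sub(r"\s+", " ", s)

print(squeeze_spaces("hello          world    !"))  # -> hello world !
```

Note that `\s` also matches tabs and newlines; if you only want to collapse literal spaces, use the pattern `" +"` instead.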