Overriding log4j.properties in Hadoop

RFT*_*RFT 15 hadoop log4j

How do I override the default log4j.properties in Hadoop? If I set hadoop.root.logger=WARN,console, it stops printing logs to the console, whereas what I actually want is to keep INFO messages out of the log file. I added a log4j.properties file to my jar, but I still can't override the default one. In short, I want the log file to print only errors and warnings. Here is the configuration:

# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log

#
# Job Summary Appender 
#
# Use following logger to send summary to separate file defined by 
# hadoop.mapreduce.jobsummary.log.file rolled daily:
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
# 
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log

# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter

# Logging Threshold
log4j.threshold=ALL

#
# Daily Rolling File Appender
#

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd

# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# console
# Add "console" to rootlogger above if you want to use this 
#

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n

#
# TaskLog Appender
#

#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12

log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}

log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

#
#Security appender
#
hadoop.security.log.file=SecurityAuth.audit
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender 
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}

log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#new logger
# Define some default values that can be overridden by system properties
hadoop.security.logger=INFO,console
log4j.category.SecurityLogger=${hadoop.security.logger}

#
# Rolling File Appender
#

#log4j.appender.RFA=org.apache.log4j.RollingFileAppender
#log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Logfile size and 30-day backups
#log4j.appender.RFA.MaxFileSize=1MB
#log4j.appender.RFA.MaxBackupIndex=30

#log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n

#
# FSNamesystem Audit logging
# All audit events are logged at INFO level
#
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=WARN

# Custom Logging levels

#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG

# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR

#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter

#
# Job Summary Appender
#
log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.JSA.DatePattern=.yyyy-MM-dd
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false

#
# MapReduce Audit Log Appender
#

# Set the MapReduce audit log filename
#hadoop.mapreduce.audit.log.file=hadoop-mapreduce.audit.log

# Appender for AuditLogger.
# Requires the following system properties to be set
#    - hadoop.log.dir (Hadoop Log directory)
#    - hadoop.mapreduce.audit.log.file (MapReduce audit log filename)

#log4j.logger.org.apache.hadoop.mapred.AuditLogger=INFO,MRAUDIT
#log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
#log4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.MRAUDIT.File=${hadoop.log.dir}/${hadoop.mapreduce.audit.log.file}
#log4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd
#log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
#log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

Dan*_*iel 8

If you use the default log4j.properties file, the logging settings are overridden at startup by an environment variable set in the launch scripts. If you want to keep the default log4j setup and only change the logging level, use $HADOOP_CONF_DIR/hadoop-env.sh.

For example, to change to the DEBUG log level and the DRFA appender, use:

export HADOOP_ROOT_LOGGER="DEBUG,DRFA"
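By the same mechanism, the setting that matches the original question (only warnings and errors in the daily log file) should be something like:

export HADOOP_ROOT_LOGGER="WARN,DRFA"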


oer*_*ers 6

  1. You can remove the log4j.properties from your hadoop jar
  2. Or make sure your jar's log4j.properties comes first on the classpath (log4j picks the first log4j.properties it finds on the classpath)
  3. Or specify the system property: -Dlog4j.configuration=PATH_TO_FILE

See the log4j documentation for how it locates its configuration.
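
As a minimal sketch of option 3, assuming the stock bin/hadoop launch script is used (it passes HADOOP_OPTS through to the JVM) and /path/to/my/log4j.properties is a placeholder for your own file:

# placeholder path; adjust to where your log4j.properties actually lives
export HADOOP_OPTS="$HADOOP_OPTS -Dlog4j.configuration=file:/path/to/my/log4j.properties"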


Tej*_*til 6

Modify the log4j file inside HADOOP_CONF_DIR. Note that a hadoop job will not honor your application's log4j file; it picks up the one inside HADOOP_CONF_DIR.

If you want to force hadoop to use some other log4j file, try one of the following:

  1. You can try what @Patrice said, i.e.:

    -Dlog4j.configuration=file:/path/to/user_specific/log4j.xml

  2. Customize HADOOP_CONF_DIR/log4j.xml and set the logger levels for "your" classes as you like. Other users won't be affected unless they happen to have classes in the same package structure. This won't work for core hadoop classes, because changes to those loggers hit every user.

  3. Create a customized log4j file: copy the HADOOP_CONF_DIR directory, put your log4j file into the copy, and export HADOOP_CONF_DIR pointing at your conf directory. Other users will keep pointing at the default one. A sketch of this approach follows the list.
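
A rough sketch of option 3, assuming /etc/hadoop/conf holds the stock configuration (both paths here are placeholders, not values from the original answer):

# Copy the stock configuration directory (placeholder path; adjust to your install)
cp -r /etc/hadoop/conf ~/my-hadoop-conf
# Drop your customized log4j file into the copy
cp my-log4j.properties ~/my-hadoop-conf/log4j.properties
# Point this shell's hadoop commands at the copy; other users keep the default
export HADOOP_CONF_DIR=~/my-hadoop-conf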