Posts by RFT*_*RFT

Parsing nested JSON with Gson

{
    "Response": {
        "MetaInfo": {
            "Timestamp": "2011-11-21T14:55:06.556Z"
        },
        "View": [
            {
                "_type": "SearchResultsViewType",
                "ViewId": 0,
                "Result": [
                    {
                        "Relevance": 0.56,
                        "MatchQuality": {
                            "Country": 1,
                            "State": 1,
                            "County": 1,
                            "City": 1,
                            "PostalCode": 1
                        },
                        "Location": {
                            "LocationType": "point",
                            "DisplayPosition": {
                                "Latitude": 50.1105,
                                "Longitude": 8.684
                            },
                            "MapView": {
                                "_type": "GeoBoundingBoxType",
                                "TopLeft": {
                                    "Latitude": 50.1194932,
                                    "Longitude": 8.6699768
                                },
                                "BottomRight": {
                                    "Latitude": 50.1015068,
                                    "Longitude": 8.6980232
                                }
                            },
                            "Address": {
                                "Country": "DEU",
                                "State": "Hessen",
                                "County": "Frankfurt am Main",
                                "City": "Frankfurt am Main",
                                "District": "Frankfurt am …
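
A minimal sketch of one way to map a response shaped like this with Gson: define POJOs whose field names match the JSON keys and let Gson bind them by reflection. The wrapper class names and the trimmed-down JSON literal below are illustrative, not part of the original question.

import com.google.gson.Gson;
import java.util.List;

public class GsonNestedExample {
    // POJOs whose field names match the JSON keys, so Gson can bind them directly.
    static class TopLevel { Response Response; }
    static class Response { MetaInfo MetaInfo; List<View> View; }
    static class MetaInfo { String Timestamp; }
    static class View { int ViewId; List<Result> Result; }
    static class Result { double Relevance; Location Location; }
    static class Location { String LocationType; Position DisplayPosition; }
    static class Position { double Latitude; double Longitude; }

    public static void main(String[] args) {
        // A trimmed-down version of the response shown above.
        String json = "{\"Response\":{\"MetaInfo\":{\"Timestamp\":\"2011-11-21T14:55:06.556Z\"},"
                + "\"View\":[{\"ViewId\":0,\"Result\":[{\"Relevance\":0.56,"
                + "\"Location\":{\"LocationType\":\"point\","
                + "\"DisplayPosition\":{\"Latitude\":50.1105,\"Longitude\":8.684}}}]}]}}";

        TopLevel parsed = new Gson().fromJson(json, TopLevel.class);
        Position p = parsed.Response.View.get(0).Result.get(0).Location.DisplayPosition;
        System.out.println(p.Latitude + ", " + p.Longitude); // 50.1105, 8.684
    }
}

Since each nesting level is just another POJO, the depth of the document is handled by composition; keys without a matching field (such as _type or MatchQuality) are simply skipped.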

java json gson

37 votes
2 answers
30k views

Overriding log4j.properties in Hadoop

How do I override the default log4j.properties in Hadoop? If I set hadoop.root.logger=WARN,console, it stops printing logs to the console, whereas what I want is for it to stop printing INFO to the log file. I added a log4j.properties file to my jar, but I still can't override the default one. In short, I want the log file to contain only errors and warnings.

# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log

#
# Job Summary Appender 
#
# Use following logger to send summary to separate file defined by 
# hadoop.mapreduce.jobsummary.log.file rolled daily:
# hadoop.mapreduce.jobsummary.logger=INFO,JSA
# 
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log

# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter

# Logging Threshold
log4j.threshold=ALL

#
# Daily Rolling File Appender
#

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd

# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout …
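
One way to get a WARN-only log file without changing the console output, assuming the stock appender names shown above, is to raise the threshold on the file appender rather than on the root logger. This is a sketch, not a verified fix for every Hadoop version:

# One possible override: keep the root logger untouched so console
# behaviour is unchanged, and raise the file appender's threshold so
# the daily rolling log file only records WARN and above.
log4j.appender.DRFA.Threshold=WARN

# Alternatively, the logger can be redirected per process at start-up
# through the environment, e.g.:
# export HADOOP_ROOT_LOGGER=WARN,DRFA

Threshold is a standard log4j appender option, so this works without replacing the bundled file; whether a daemon picks up a log4j.properties from a job jar depends on classpath order, which is why editing conf/log4j.properties (or using the environment variable) tends to be more reliable.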

hadoop log4j

15 votes
3 answers
40k views

How to give a custom name to Hadoop output files

I want the output files to be in the format 2012117-part-r-00000. Basically, I want the date appended to the output file names so that I can arrange the files by date. I have looked at OutputFormat and FileOutputFormat, but they did not help in my case.
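
A sketch of one way to do this under the new mapreduce API, assuming text output: subclass TextOutputFormat and override getDefaultWorkFile so every part file gets a date prefix. The class name DatedTextOutputFormat and the yyyyMMdd pattern are illustrative, not from the original question.

import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// A TextOutputFormat that prefixes each part file with the current date,
// yielding names like 20120117-part-r-00000.
public class DatedTextOutputFormat<K, V> extends TextOutputFormat<K, V> {
    @Override
    public Path getDefaultWorkFile(TaskAttemptContext context, String extension)
            throws IOException {
        FileOutputCommitter committer =
                (FileOutputCommitter) getOutputCommitter(context);
        String date = new SimpleDateFormat("yyyyMMdd").format(new Date());
        return new Path(committer.getWorkPath(),
                date + "-" + getUniqueFile(context, "part", extension));
    }
}

Registered with job.setOutputFormatClass(DatedTextOutputFormat.class), the committer then promotes the dated work files into the job's output directory unchanged.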

hadoop

11 votes
3 answers
10k views

Multithreaded HttpClient

import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;

public class test {
    public static final int nThreads = 2;

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ExecutorService execute = Executors.newFixedThreadPool(nThreads);

        for (int i = 0; i < nThreads; ++i) {
            execute.execute(new MyTask());
        }

        // Shut down first so awaitTermination can observe completion;
        // awaitTermination on its own never stops the pool.
        execute.shutdown();
        execute.awaitTermination(1000, TimeUnit.MILLISECONDS);
    }
}

class MyTask implements Runnable {
    public static final int maxCalls = 10;
    public static final int sleepMillis = 500;
    // Shared mutable statics: every task reads and writes the same two
    // fields, which is the core concurrency problem here (see the sketch below).
    private static HttpResponse response;
    private static HttpClient httpclient;

    public void run(){
        int counter = …
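
Since the snippet shares one static HttpClient across all tasks, a hedged sketch of making that sharing safe is to back the shared client with a pooling, thread-safe connection manager. This assumes the HttpClient 4.1 API, and the URL is a placeholder:

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
import org.apache.http.util.EntityUtils;

public class SharedClientSketch {
    public static void main(String[] args) {
        // One client for all threads, backed by a thread-safe connection pool.
        ThreadSafeClientConnManager cm = new ThreadSafeClientConnManager();
        cm.setMaxTotal(2); // cap pooled connections at the worker count
        final HttpClient httpclient = new DefaultHttpClient(cm);

        Runnable task = new Runnable() {
            public void run() {
                try {
                    HttpResponse response =
                            httpclient.execute(new HttpGet("http://example.com/"));
                    // Consume the body so the connection returns to the pool.
                    EntityUtils.consume(response.getEntity());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        new Thread(task).start();
        new Thread(task).start();
    }
}

Note that the HttpResponse stays a local variable per request; only the client (which is designed for concurrent use when pooled this way) is shared.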

java multithreading httpclient

7 votes
3 answers
20k views

Unusual Hadoop error - tasks getting killed on their own

When I run my Hadoop job, I get the following error:

Request received to kill task 'attempt_201202230353_23186_r_000004_0' by user. Task has been KILLED_UNCLEAN by the user.

The logs look clean. I run 28 reducers, and this does not happen for all of them; it happens for a select few, and those reducers then start again. I don't understand this. Another thing I've noticed is that with a small dataset I rarely see this error!
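
One common explanation for KILLED_UNCLEAN on a handful of attempts while the job still succeeds is speculative execution: the framework launches duplicate attempts and kills whichever copy finishes second, which would also fit a small dataset rarely triggering it. As a hedged way to test that theory, speculative reducers can be switched off (old-API JobConf shown; the property name dates from this Hadoop era):

import org.apache.hadoop.mapred.JobConf;

public class NoSpeculationSketch {
    public static JobConf configure() {
        JobConf conf = new JobConf();
        // Disable speculative reducers so no duplicate attempts are
        // launched (and therefore none get KILLED_UNCLEAN).
        conf.setReduceSpeculativeExecution(false);
        // Equivalent property form:
        // conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        return conf;
    }
}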

hadoop

6 votes
1 answer
5012 views

Tag statistics

hadoop ×3

java ×2

gson ×1

httpclient ×1

json ×1

log4j ×1

multithreading ×1