Found interface org.apache.hadoop.mapreduce.TaskAttemptContext

Rig*_*Rig 6 java hadoop mapreduce avro

So far I haven't found a solution to my specific problem, or at least not one that works. It's driving me crazy. This particular combination doesn't seem to turn up much in Google-land. As far as I can tell, the error occurs when the job enters the mapper. The input to this job is Avro schema output, compressed with deflate, though I've also tried it uncompressed.

Avro: 1.7.7, Hadoop: 2.4.1

I'm getting this error and I don't know why. Below are my job, mapper, and reducer. The error occurs when the mapper runs.

Sample uncompressed Avro input file (StockReport.SCHEMA$ is defined this way):

{"day": 3, "month": 2, "year": 1986, "stocks": [{"symbol": "AAME", "timestamp": 507833213000, "dividend": 10.59}]}

Job:

@Override
public int run(String[] strings) throws Exception {
    Job job = Job.getInstance();
    job.setJobName("GenerateGraphsJob");
    job.setJarByClass(GenerateGraphsJob.class);

    configureJob(job);

    int resultCode = job.waitForCompletion(true) ? 0 : 1;

    return resultCode;
}

private void configureJob(Job job) throws IOException {
    try {
        Configuration config = getConf();
        Path inputPath = ConfigHelper.getChartInputPath(config);
        Path outputPath = ConfigHelper.getChartOutputPath(config);

        job.setInputFormatClass(AvroKeyInputFormat.class);
        AvroKeyInputFormat.addInputPath(job, inputPath);
        AvroJob.setInputKeySchema(job, StockReport.SCHEMA$);


        job.setMapperClass(StockAverageMapper.class);
        job.setCombinerClass(StockAverageCombiner.class);
        job.setReducerClass(StockAverageReducer.class);

        FileOutputFormat.setOutputPath(job, outputPath);

    } catch (IOException | ClassCastException e) {
        LOG.error("A job error has occurred.", e);
    }
}

Mapper:

public class StockAverageMapper extends
        Mapper<AvroKey<StockReport>, NullWritable, StockYearSymbolKey, StockReport> {
    private static Logger LOG = LoggerFactory.getLogger(StockAverageMapper.class);

private final StockReport stockReport = new StockReport();
private final StockYearSymbolKey stockKey = new StockYearSymbolKey();

@Override
protected void map(AvroKey<StockReport> inKey, NullWritable ignore, Context context)
        throws IOException, InterruptedException {
    try {
        StockReport inKeyDatum = inKey.datum();
        for (Stock stock : inKeyDatum.getStocks()) {
            updateKey(inKeyDatum, stock);
            updateValue(inKeyDatum, stock);
            context.write(stockKey, stockReport);
        }
    } catch (Exception ex) {
        LOG.debug(ex.toString());
    }
}
}

Schema for the map output key:

{
  "namespace": "avro.model",
  "type": "record",
  "name": "StockYearSymbolKey",
  "fields": [
    {
      "name": "year",
      "type": "int"
    },
    {
      "name": "symbol",
      "type": "string"
    }
  ]
}

Stack trace:

java.lang.Exception: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
    at org.apache.avro.mapreduce.AvroKeyInputFormat.createRecordReader(AvroKeyInputFormat.java:47)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:492)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:735)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Edit: Not that it matters, but I'm working toward reducing this into data that can produce JFreeChart output. It doesn't get past the mapper, so it shouldn't be relevant.

Den*_*Huo 8

The problem is that org.apache.hadoop.mapreduce.TaskAttemptContext was a class in Hadoop 1 but became an interface in Hadoop 2.

This is one of the reasons that libraries which depend on the Hadoop libraries need separately compiled jar files for Hadoop 1 and Hadoop 2. Based on your stack trace, it appears you somehow got a Hadoop1-compiled Avro jar file on your classpath, despite running Hadoop 2.4.1.

The Avro download mirrors provide nice separate downloads for avro-mapred-1.7.7-hadoop1.jar and avro-mapred-1.7.7-hadoop2.jar.
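If you pull Avro through Maven rather than downloading the jar directly, the Hadoop 2 build of avro-mapred is published under the same coordinates with a `hadoop2` classifier. A minimal sketch of the dependency (the coordinates below assume the standard `org.apache.avro` artifacts on Maven Central):

```xml
<!-- Select the Hadoop 2-compatible build of avro-mapred via its classifier.
     Without the classifier, older avro-mapred builds default to Hadoop 1 classes,
     which triggers the IncompatibleClassChangeError above on Hadoop 2. -->
<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-mapred</artifactId>
  <version>1.7.7</version>
  <classifier>hadoop2</classifier>
</dependency>
```

After changing the dependency, make sure no stale hadoop1 copy of the jar remains on the cluster or in your job's distributed classpath.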