Failed with exception java.io.IOException: org.apache.avro.AvroTypeException: Found long, expecting union in hive

Sij*_*eph 5 java hadoop hive

Need help!!!

I am using Flume to stream Twitter feeds into HDFS, and loading them into Hive for analysis.

The steps are as follows:

Data in HDFS:

I have described the Avro schema in an .avsc file and put it into Hadoop:

 {"type":"record",
 "name":"Doc",
 "doc":"adoc",
 "fields":[{"name":"id","type":"string"},
       {"name":"user_friends_count","type":["int","null"]},
       {"name":"user_location","type":["string","null"]},
       {"name":"user_description","type":["string","null"]},
       {"name":"user_statuses_count","type":["int","null"]},
       {"name":"user_followers_count","type":["int","null"]},
       {"name":"user_name","type":["string","null"]},
       {"name":"user_screen_name","type":["string","null"]},
       {"name":"created_at","type":["string","null"]},
       {"name":"text","type":["string","null"]},
       {"name":"retweet_count","type":["boolean","null"]},
       {"name":"retweeted","type":["boolean","null"]},
       {"name":"in_reply_to_user_id","type":["long","null"]},
       {"name":"source","type":["string","null"]},
       {"name":"in_reply_to_status_id","type":["long","null"]},
       {"name":"media_url_https","type":["string","null"]},
       {"name":"expanded_url","type":["string","null"]}]}
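Before loading, it can help to double-check what each field's declared union actually is, since the AvroTypeException in the title means the data contained a type (a long) that the declared union did not allow. A stdlib-only Python sketch (no Avro libraries; the fields below are a hand-copied subset of the schema above, included inline for brevity):

```python
import json

# Inline copy of a few fields from the schema above; in practice you
# would read the full AvroSchemaFile.avsc from disk instead.
schema = json.loads("""
{"type": "record", "name": "Doc",
 "fields": [{"name": "id", "type": "string"},
            {"name": "created_at", "type": ["string", "null"]},
            {"name": "in_reply_to_user_id", "type": ["long", "null"]}]}
""")

# Map each field to its declared union so a mismatch against the actual
# data (e.g. a long arriving where only string/null is declared) stands out.
declared = {f["name"]: f["type"] if isinstance(f["type"], list) else [f["type"]]
            for f in schema["fields"]}

for name, types in declared.items():
    print(f"{name}: {', '.join(types)}")
```

Any column whose incoming data does not match one branch of its declared union will trigger the "Found long, expecting union" error at read time.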

I wrote a .hql file to create the table and load data into it:

 create table tweetsavro
    row format serde
        'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    stored as inputformat
        'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    outputformat
        'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    tblproperties ('avro.schema.url'='hdfs:///avro_schema/AvroSchemaFile.avsc');

    load data inpath '/test/twitter_data/FlumeData.*' overwrite into table tweetsavro;

I ran the .hql file successfully, but when I run the select * from <tablename> command in hive, it shows the following error:

[error screenshot]

The output of desc tweetsavro is:

hive> desc tweetsavro;
OK
id                      string                                      
user_friends_count      int                                         
user_location           string                                      
user_description        string                                      
user_statuses_count     int                                         
user_followers_count    int                                         
user_name               string                                      
user_screen_name        string                                      
created_at              string                                      
text                    string                                      
retweet_count           boolean                                     
retweeted               boolean                                     
in_reply_to_user_id     bigint                                      
source                  string                                      
in_reply_to_status_id   bigint                                      
media_url_https         string                                      
expanded_url            string                                      
Time taken: 0.697 seconds, Fetched: 17 row(s)

小智 6

I was facing exactly the same problem. The issue was with the timestamp field (the "created_at" column in your case), which I was trying to insert into the new table as a string. My assumption was that this data would be ["null","string"] in my source. I analyzed the source Avro schema generated by the sqoop import --as-avrodatafile process. The schema generated by the import had the following signature for the timestamp column:
{ "name" : "order_date", "type" : [ "null", "long" ], "default" : null, "columnName" : "order_date", "sqlType" : "93" },

SqlType 93 stands for the TIMESTAMP data type. So in the target table's Avro schema file, I changed the data type to "long", which resolved the problem. My guess is that there is a data type mismatch in one of your columns.
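A minimal sketch of that fix, assuming created_at is the offending column (the column name here is taken from the question and is an assumption; in the answer above the actual column was order_date): load the .avsc as JSON and swap "string" for "long" in that field's union.

```python
import json

# Schema fragment with the suspect column declared as a string union.
# In practice this would be read from the target table's .avsc file.
schema = json.loads(
    '{"type": "record", "name": "Doc", "fields":'
    ' [{"name": "created_at", "type": ["string", "null"]}]}'
)

# Replace "string" with "long" in the suspect column's union, mirroring
# the fix described above (created_at as the culprit is an assumption).
for field in schema["fields"]:
    if field["name"] == "created_at":
        field["type"] = ["long" if t == "string" else t for t in field["type"]]

print(json.dumps(schema["fields"][0]["type"]))  # prints ["long", "null"]
```

After rewriting the .avsc, drop and recreate the table (or update avro.schema.url) so Hive picks up the corrected schema.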