Data not getting loaded into a partitioned table in Hive

Unm*_*eni · 6 · hadoop, hive, mapreduce, partition

I am trying to create partitions for my table so that I can update a value.

This is my sample data:

1,Anne,Admin,50000,A
2,Gokul,Admin,50000,B
3,Janet,Sales,60000,A

I want to update Janet's record.

So for that I created a table with Department as the partition column.

create external table trail (EmployeeID Int, FirstName String, Designation String, Salary Int) PARTITIONED BY (Department String) row format delimited fields terminated by "," location '/user/sreeveni/HIVE';

But after running the above command, no data was inserted into the trail table.

hive> select * from trail;
OK
Time taken: 0.193 seconds

hive> desc trail;
OK
employeeid              int                     None                
firstname               string                  None                
designation             string                  None                
salary                  int                     None                
department              string                  None                

# Partition Information      
# col_name              data_type               comment             

department              string                  None   

Am I doing something wrong?

UPDATE

As suggested, I tried to load data into my table:

load data inpath '/user/aibladmin/HIVE' overwrite into table trail partition(Department);

But it is showing:

FAILED: SemanticException [Error 10096]: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict

It still did not work even after setting hive.exec.dynamic.partition.mode=nonstrict.

Is there anything else that needs to be done?
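For reference, strict mode can also be satisfied without any dynamic-partition settings by naming the partition value statically in the LOAD statement. A minimal sketch, assuming one input file per department (the file name partA.csv is illustrative, not from the original post):

```sql
-- Strict mode is satisfied because the partition value is static.
-- Note: the file loaded this way should contain only the four
-- non-partition columns, since Department comes from the PARTITION spec.
LOAD DATA INPATH '/user/aibladmin/HIVE/partA.csv'
OVERWRITE INTO TABLE trail PARTITION (Department = 'A');
```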

小智 17

Try setting both of the following properties:

SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

When writing the insert statement for a partitioned table, make sure you specify the partition column last in the select clause.
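Putting those two properties together, one common pattern is to load the raw CSV into an unpartitioned staging table first and then insert into the partitioned table with Department last in the select list. A sketch assuming the trail table from the question (the trail_staging table name is hypothetical):

```sql
-- Unpartitioned staging table matching the raw file layout
CREATE TABLE trail_staging (
  EmployeeID INT, FirstName STRING, Designation STRING,
  Salary INT, Department STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

LOAD DATA INPATH '/user/aibladmin/HIVE' OVERWRITE INTO TABLE trail_staging;

SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- Department comes last in the select list, matching PARTITION (Department)
INSERT OVERWRITE TABLE trail PARTITION (Department)
SELECT EmployeeID, FirstName, Designation, Salary, Department
FROM trail_staging;
```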


use*_*461 2

Please try the following:

First create the table:

create external table test23 (EmployeeID Int,FirstName String,Designation String,Salary Int) PARTITIONED BY (Department String) row format delimited fields terminated by "," location '/user/rocky/HIVE';

Create a directory in HDFS named after the partition:

$ hadoop fs -mkdir /user/rocky/HIVE/department=50000

Create a local file abc.txt containing only the records for that partition (here, the rows with value 50000):

$ cat abc.txt 
1,Anne,Admin,50000,A
2,Gokul,Admin,50000,B

Put it into HDFS:

$ hadoop fs -put /home/yarn/abc.txt /user/rocky/HIVE/department=50000

Now alter the table to register the partition:

ALTER TABLE test23 ADD PARTITION(department=50000);

And check the result:

select * from test23;
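As a side note, if several partition directories already exist under the table's location, Hive's MSCK REPAIR TABLE command can register them all at once instead of issuing one ALTER TABLE ... ADD PARTITION per directory; a sketch using the test23 table from this answer:

```sql
-- Scan the table's HDFS location and add any partition directories
-- (department=...) that the metastore does not yet know about
MSCK REPAIR TABLE test23;

-- Confirm which partitions are now registered
SHOW PARTITIONS test23;
```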