Tags: hive, bucket, bigdata, apache-spark, apache-spark-sql
How can I get all records from the nth bucket of a Hive table?
Something like: select * from bucket 9 from bucketTable;
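For context, a minimal sketch of the kind of bucketed table this question applies to (the columns and the bucket count of 32 are assumptions for illustration):
-- hypothetical bucketed table; 32 buckets on id is an assumption
create table bucketTable (id int, name string)
clustered by (id) into 32 buckets
stored as orc;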
You can achieve this in different ways:
Approach 1: Get the table's stored location from desc formatted <db>.<tab_name>, then read the 9th bucket file directly from the HDFS filesystem.
(or)
Approach 2: Use input_file_name(), then filter the data on the filename to keep only the 9th bucket.
Note that bucket files are numbered from zero, so the 9th bucket corresponds to the file 000008_0.
Example:
Approach-1:
Scala:
val df = spark.sql("desc formatted <db>.<tab_name>")
//get table location in hdfs path
val loc_hdfs = df.filter('col_name === "Location").select("data_type").collect.map(x => x(0)).mkString
//based on your table format change the read format
val ninth_buk = spark.read.orc(s"${loc_hdfs}/000008_0*")
//display the data
ninth_buk.show()
Pyspark:
from pyspark.sql.functions import *
df = spark.sql("desc formatted <db>.<tab_name>")
loc_hdfs = df.filter(col("col_name") == "Location").select("data_type").collect()[0]["data_type"]
ninth_buk = spark.read.orc(loc_hdfs + "/000008_0*")
ninth_buk.show()
Approach-2:
Scala:
val df = spark.read.table("<db>.<tab_name>")
//add input_file_name
val df1 = df.withColumn("filename",input_file_name())
//filter only the 9th bucket filename and select only the original table columns
val ninth_buk = df1.filter('filename.contains("000008_0")).select(df.columns.head,df.columns.tail:_*)
ninth_buk.show()
Pyspark:
from pyspark.sql.functions import *
df = spark.read.table("<db>.<tab_name>")
df1 = df.withColumn("filename",input_file_name())
ninth_buk = df1.filter(col("filename").contains("000008_0")).select(*df.columns)
ninth_buk.show()
If you have huge data, Approach-2 is not recommended, because the filter has to scan the entire dataframe.
In Hive:
set hive.support.quoted.identifiers=none;
select `(fn)?+.+` from (
  select *, input__file__name fn from table_name) e
where e.fn like '%000008_0%';
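As an aside, Hive's TABLESAMPLE clause can also address a single bucket directly, assuming you know the table's bucket count (32 here is an assumption) and its bucketing column:
-- reads only bucket 9 when 32 matches the table's actual bucket count
select * from <db>.<tab_name> tablesample(bucket 9 out of 32 on <bucket_col>);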