I want to read/write Parquet files programmatically in Java using Apache's parquet-mr project. I can't seem to find any documentation on how to use this API (other than browsing the source code and seeing how it is used) - just wondering whether any such documentation exists?
I wrote a blog post about reading Parquet files (http://www.jofre.de/?p=1459) and came up with the following solution, which is also able to read INT96 fields.
You need the following Maven dependencies:
<dependencies>
    <dependency>
        <groupId>org.apache.parquet</groupId>
        <artifactId>parquet-hadoop</artifactId>
        <version>1.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.0</version>
    </dependency>
</dependencies>
The code is basically:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.page.PageReadStore;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.convert.GroupRecordConverter;
import org.apache.parquet.format.converter.ParquetMetadataConverter;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.io.ColumnIOFactory;
import org.apache.parquet.io.MessageColumnIO;
import org.apache.parquet.io.RecordReader;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.Type;

public class Main {
    private static Path path = new Path("file:\\C:\\Users\\file.snappy.parquet");

    // Print every primitive value of a record, one "fieldName value" pair per line.
    private static void printGroup(Group g) {
        int fieldCount = g.getType().getFieldCount();
        for (int field = 0; field < fieldCount; field++) {
            int valueCount = g.getFieldRepetitionCount(field);
            Type fieldType = g.getType().getType(field);
            String fieldName = fieldType.getName();
            for (int index = 0; index < valueCount; index++) {
                if (fieldType.isPrimitive()) {
                    System.out.println(fieldName + " " + g.getValueToString(field, index));
                }
            }
        }
    }

    public static void main(String[] args) throws IllegalArgumentException {
        Configuration conf = new Configuration();
        try {
            // Read the footer to obtain the file's schema.
            ParquetMetadata readFooter = ParquetFileReader.readFooter(conf, path, ParquetMetadataConverter.NO_FILTER);
            MessageType schema = readFooter.getFileMetaData().getSchema();
            ParquetFileReader r = new ParquetFileReader(conf, path, readFooter);
            PageReadStore pages = null;
            try {
                // Iterate over the row groups and materialize each record as a Group.
                while (null != (pages = r.readNextRowGroup())) {
                    final long rows = pages.getRowCount();
                    System.out.println("Number of rows: " + rows);
                    final MessageColumnIO columnIO = new ColumnIOFactory().getColumnIO(schema);
                    final RecordReader<Group> recordReader = columnIO.getRecordReader(pages, new GroupRecordConverter(schema));
                    for (int i = 0; i < rows; i++) {
                        final Group g = recordReader.read();
                        printGroup(g);
                        // TODO Compare to System.out.println(g);
                    }
                }
            } finally {
                r.close();
            }
        } catch (IOException e) {
            System.out.println("Error reading parquet file.");
            e.printStackTrace();
        }
    }
}
The documentation is a bit sparse, and the code itself is tersely documented as well. I found ORC much easier to work with, if that's an option for you.
The code snippet below converts a Parquet file to CSV with a header row using the Avro interface - the conversion will fail if the file contains INT96 (Hive timestamp) types (a limitation of the Avro interface), and decimals come out as byte arrays.
Make sure you use version 1.9.0 or higher of the parquet-avro library, otherwise the logging is a bit of a mess.
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.util.List;

import org.apache.avro.Schema;
import org.apache.avro.Schema.Field;
import org.apache.avro.generic.GenericRecord;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;

// path (an org.apache.hadoop.fs.Path), lines (the maximum number of rows to
// convert) and header (whether to print a header row) are defined by the caller.
BufferedWriter out = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(java.io.FileDescriptor.out), "ASCII"));
ParquetReader<GenericRecord> reader = AvroParquetReader.<GenericRecord>builder(path).build();
Schema sc = null;
List<Field> fields = null;
for (long i = 0; i < lines; i++) {
    GenericRecord result = reader.read();
    if (result == null) {
        break;
    }
    if (i == 0) {
        // Take the schema from the first record.
        sc = result.getSchema();
        fields = sc.getFields();
        if (header) { // print header out?
            for (int j = 0; j < fields.size(); j++) {
                if (j != 0) {
                    out.write(",");
                }
                out.write(fields.get(j).name());
            }
            out.newLine();
        }
    }
    for (int j = 0; j < fields.size(); j++) {
        if (j != 0) {
            out.write(",");
        }
        Object o = result.get(j);
        if (o != null) {
            String v = o.toString();
            if (!v.equals("null")) {
                out.write("\"" + v + "\"");
            }
        }
    }
    out.newLine();
}
out.flush();
reader.close();
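One caveat with the snippet above: values are wrapped in quotes, but embedded quotes and commas are not escaped, so fields containing them would produce malformed CSV. A minimal, stdlib-only escaping helper could look like this (the `CsvEscape` class and `escape` method are my own illustration, not part of parquet-avro):

```java
// Hypothetical helper: quotes a value for CSV output, doubling any embedded
// quotes per the usual CSV convention, so commas and quotes inside fields survive.
public class CsvEscape {
    public static String escape(String v) {
        if (v == null) {
            return ""; // emit nulls as empty fields
        }
        // Double any embedded quotes, then wrap the whole field in quotes.
        return "\"" + v.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        System.out.println(escape("plain"));      // "plain"
        System.out.println(escape("has,comma"));  // "has,comma"
        System.out.println(escape("has\"quote")); // "has""quote"
    }
}
```

In the loop above you would write `out.write(CsvEscape.escape(v))` instead of concatenating the quotes by hand.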