Sha*_*ick 3 java csv scala apache-arrow
I'm currently using Apache Arrow's Java API (though from Scala, which the code samples here use) to get familiar with the tool.
As an exercise, I chose to load a CSV file into Arrow vectors and then save those vectors to an Arrow file. The first part seems easy enough; I tried this:
val csvLines: Stream[Array[String]] = <open stream from CSV parser>
// There are other types of allocator, but things work with this one...
val allocator = new RootAllocator(Int.MaxValue)
// Initialize the vectors
val vectors = initVectors(csvLines.head, allocator)
// Put their mutators into an array for easy access
val mutators = vectors.map(_.getMutator)
// Work on the data, zipping it with its index
Stream.from(0)
.zip(csvLines.tail) // Work on the tail (head contains the headers)
.foreach(rowTup => // rowTup = (index, csvRow as an Array[String])
Range(0, rowTup._2.size) // Iterate on each column...
.foreach(columnNumber =>
writeToMutator(
mutators(columnNumber), // get that column's mutator
idx=rowTup._1, // pass the current row number
data=rowTup._2(columnNumber) // pass the entry of the current column
)
)
)
with initVectors() and writeToMutator() defined as:
def initVectors(
columns: Array[String],
alloc: RootAllocator): Array[NullableVarCharVector] = {
// Initialize a vector for each column
val vectors = columns.map(colName =>
new NullableVarCharVector(colName, alloc))
// 4096 bytes for 1024 values initially; both numbers are arbitrary.
// Note: 1 << 12 == 4096 (the Scala expression 2 ^ 12 is XOR and equals 14).
vectors.foreach(_.allocateNew(1 << 12, 1024))
vectors
}
def writeToMutator(
mutator: NullableVarCharVector#Mutator,
idx: Int,
data: String): Unit = {
// The CSV may contain null values
if (data != null) {
  val bytes = data.getBytes()
  mutator.setSafe(idx, bytes, 0, bytes.length)
} else {
  mutator.setNull(idx)
}
}
(For now I'm not worried about using the proper types, and I store everything as strings, i.e. as VarChar in Arrow terms.)
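One Scala-specific pitfall worth flagging around the `allocateNew` size argument: `^` on `Int`s is bitwise XOR, not exponentiation, so `2^12` is 14, not 4096. A quick self-contained check:

```scala
object XorVsPow extends App {
  // ^ on Ints is bitwise XOR in Scala (and Java), not exponentiation:
  val xor = 2 ^ 12   // 0b0010 ^ 0b1100 == 0b1110 == 14
  // A power of two is usually written as a shift instead:
  val pow = 1 << 12  // 4096
  println(s"2 ^ 12 = $xor, 1 << 12 = $pow")
}
```

So a buffer sized with `2^12` would silently be allocated far smaller than intended.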
So at this point I have a collection of NullableVarCharVectors and can read from and write to them. That all works fine. Now, for the next step, I don't know how to actually wrap them together and serialize them to an Arrow file. I stumbled upon the AbstractFieldWriter abstract class, but how its implementations are meant to be used is unclear.
So, the main question is:
Edited to add: this metadata description page gives a good overview of the topic.
The API's test classes seem to contain some useful material; I'll post back with a sample once I've tried it.
Looking at TestArrowFile.java and BaseFileTest.java, here is what I found.
Populating the vectors now looks like this:
// Open stream of rows
val csvLines: Stream[Array[String]] = <open stream from CSV parser>
// Define a parent to hold the vectors
val parent = MapVector.empty("parent", allocator)
// Create a new writer. VarCharWriterImpl would probably do as well?
val writer = new ComplexWriterImpl("root", parent)
// Initialise a writer for each column, using the header as the name
val rootWriter = writer.rootAsMap()
val writers = csvLines.head.map(colName =>
rootWriter.varChar(colName))
Stream.from(0)
.zip(csvLines.tail) // Zip the rows with their index
.foreach( rowTup => { // Iterate on each (index, row) tuple
val (idx, row) = rowTup
Range(0, row.size) // Iterate on each field of the row
.foreach(column =>
Option(row(column)) // row(column) may be null,
.foreach(str => // use the option as a null check
write(writers(column), idx, allocator, str)
)
)
}
)
toFile(parent.getChild("root"), "csv.arrow") // Save everything to a file
with write defined as:
def write(writer: VarCharWriter, idx: Int,
allocator: BufferAllocator, data: String): Unit = {
// Set the position to the correct index
writer.setPosition(idx)
val bytes = data.getBytes()
// Apparently the allocator is required again to build a temporary buffer
val varchar = allocator.buffer(bytes.length)
varchar.setBytes(0, bytes)
writer.writeVarChar(0, bytes.length, varchar)
}
def toFile(parent: FieldVector, fName: String): Unit = {
// Extract a schema from the parent: that's the part I struggled with in the original question
val rootSchema = new VectorSchemaRoot(parent)
val stream = new FileOutputStream(fName)
val fileWriter = new ArrowFileWriter(
rootSchema,
null, // We don't use dictionary encoding.
stream.getChannel)
// Write everything to file...
fileWriter.start()
fileWriter.writeBatch()
fileWriter.end()
stream.close()
}
With the above I can save the CSV to a file. I checked that everything went smoothly by reading the file back and converting it to CSV again; the content came out unchanged.
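For completeness, the read-back check can be sketched roughly as below. This is a sketch against the same 0.x-era API as the writer code above (ArrowFileReader lives next to ArrowFileWriter in org.apache.arrow.vector.file); exact signatures, such as whether loadNextBatch returns a boolean, vary between Arrow versions:

```scala
import java.io.FileInputStream
import scala.collection.JavaConverters._
import org.apache.arrow.memory.RootAllocator
import org.apache.arrow.vector.file.ArrowFileReader

def fromFile(fName: String): Unit = {
  val allocator = new RootAllocator(Int.MaxValue)
  val stream = new FileInputStream(fName)
  val reader = new ArrowFileReader(stream.getChannel, allocator)
  try {
    // The schema is available before any batch is loaded
    val root = reader.getVectorSchemaRoot
    while (reader.loadNextBatch()) { // one iteration per record batch
      root.getFieldVectors.asScala.foreach { vector =>
        // Print each column's name and value count for this batch
        println(s"${vector.getField.getName}: ${vector.getAccessor.getValueCount}")
      }
    }
  } finally {
    reader.close()
    stream.close()
  }
}
```

Each loadNextBatch call repopulates the vectors held by the VectorSchemaRoot, so the accessors have to be re-read per batch.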
Note that ComplexWriterImpl allows writing columns of different types, which will come in handy to avoid storing numeric columns as strings.
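As a hedged sketch of what a typed column might look like, assuming the same complex-writer API as above (IntWriter and the rootWriter.integer(...) factory mirror the varChar ones used earlier; writeIntColumn is a hypothetical helper of mine, not part of Arrow):

```scala
import org.apache.arrow.vector.complex.writer.IntWriter

// Hypothetical helper mirroring write() above, but for integer columns.
// No scratch buffer is needed: writeInt takes the value directly.
def writeIntColumn(writer: IntWriter, idx: Int, data: String): Unit = {
  writer.setPosition(idx)
  writer.writeInt(data.toInt)
}

// The column writer would then be obtained from the root map writer, e.g.:
// val intWriter = rootWriter.integer(colName)
```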
(I'm now playing with the reading side of things, which will probably deserve its own SO question.)