Jul*_*ian 10 java kaitai-struct
I'm using Kaitai Struct to parse large PCAP files in Java. Whenever a file exceeds Integer.MAX_VALUE bytes, I get an IllegalArgumentException caused by the size limit of the underlying ByteBuffer.
I haven't found references to this issue elsewhere, which leads me to believe that this is not a library limitation but a mistake in the way I'm using it.
Since the problem is caused by trying to map the whole file into the ByteBuffer, I'd think the solution would be to map only the first region of the file, and as the data is consumed, map again, skipping the data already parsed.
As this is done within the Kaitai Struct runtime library, it would mean writing my own class extending KaitaiStream and overriding the auto-generated fromFile(...) method, and this doesn't really seem like the right approach.
The auto-generated method to parse from a file for the Pcap class is:
public static Pcap fromFile(String fileName) throws IOException {
  return new Pcap(new ByteBufferKaitaiStream(fileName));
}
And the ByteBufferKaitaiStream provided by the Kaitai Struct Runtime library is backed by a ByteBuffer.
private final FileChannel fc;
private final ByteBuffer bb;
public ByteBufferKaitaiStream(String fileName) throws IOException {
    fc = FileChannel.open(Paths.get(fileName), StandardOpenOption.READ);
    bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
}
Which in turn is limited by the ByteBuffer maximum size.
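The cap is easy to reproduce without an actual multi-gigabyte file, because FileChannel.map rejects any requested size above Integer.MAX_VALUE before doing any I/O. A minimal stdlib-only demonstration (the temp file is just a stand-in for a large capture):

```java
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MapLimitDemo {
    public static void main(String[] args) throws Exception {
        // Any readable file will do; the size check happens before I/O.
        Path tmp = Files.createTempFile("map-limit", ".bin");
        try (FileChannel fc = FileChannel.open(tmp, StandardOpenOption.READ)) {
            long tooBig = (long) Integer.MAX_VALUE + 1;
            try {
                fc.map(FileChannel.MapMode.READ_ONLY, 0, tooBig);
            } catch (IllegalArgumentException e) {
                // This is the same exception the ByteBuffer-backed Kaitai
                // stream hits when fc.size() exceeds Integer.MAX_VALUE.
                System.out.println("map rejected: " + e.getMessage());
            }
        } finally {
            Files.delete(tmp);
        }
    }
}
```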
Am I missing some obvious workaround? Is it really a limitation of the Java implementation of Kaitai Struct?
There are two separate problems here:
Running Pcap.fromFile() on large files is generally not a very efficient approach anyway, since you end up parsing the entire file into in-memory arrays at once. An example of how to avoid that is given in kaitai_struct/issues/255: the basic idea is to take control of how each packet is read, and then process or discard each packet after parsing it, one at a time.
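Stripped of the generated classes, the issue-255 approach amounts to: parse the fixed header once, then read and handle one record at a time instead of materializing the whole file. A plain-Java sketch of that loop over a made-up length-prefixed record format (the layout is illustrative, not the real PCAP structures):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamingParseDemo {
    // Read one length-prefixed record; return null at a clean end of stream.
    static byte[] readRecord(DataInputStream in) throws IOException {
        int len;
        try {
            len = in.readInt();
        } catch (EOFException e) {
            return null; // no more records
        }
        byte[] body = new byte[len];
        in.readFully(body);
        return body;
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("records", ".bin");
        // Write three toy records to stand in for packets.
        try (DataOutputStream out = new DataOutputStream(Files.newOutputStream(tmp))) {
            for (String s : new String[]{"one", "two", "three"}) {
                byte[] b = s.getBytes("UTF-8");
                out.writeInt(b.length);
                out.write(b);
            }
        }
        // Consume records one at a time; each record is processed and
        // dropped before the next is read, so memory use stays flat.
        int count = 0;
        try (DataInputStream in = new DataInputStream(Files.newInputStream(tmp))) {
            for (byte[] rec; (rec = readRecord(in)) != null; ) {
                count++;
                System.out.println("record " + count + ": " + new String(rec, "UTF-8"));
            }
        }
        System.out.println("total=" + count);
        Files.delete(tmp);
    }
}
```

With Kaitai Struct, the same pattern means constructing the per-packet type yourself in a loop instead of letting the top-level fromFile() eagerly parse everything.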
Java's 2 GB limit on memory-mapped files. To mitigate this, you can use the alternative RandomAccessFile-based KaitaiStream implementation, RandomAccessFileKaitaiStream. It will likely be slower, but it should avoid the 2 GB problem.
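The RandomAccessFile-based stream sidesteps the mapping cap because RandomAccessFile addresses the file with a long offset and reads through ordinary I/O calls rather than one mapped region. A minimal sketch of that access pattern (a small temp file stands in for a multi-gigabyte capture):

```java
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class SeekReadDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("seek", ".bin");
        Files.write(tmp, "abcdefghij".getBytes("US-ASCII"));
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "r")) {
            // seek() takes a long, so offsets beyond Integer.MAX_VALUE are
            // valid; no single mapping of the whole file is ever created.
            raf.seek(5L);
            byte[] buf = new byte[5];
            raf.readFully(buf);
            System.out.println(new String(buf, "US-ASCII")); // "fghij"
        } finally {
            Files.delete(tmp);
        }
    }
}
```

With the actual runtime, the swap is just constructing the parser as `new Pcap(new RandomAccessFileKaitaiStream(fileName))` instead of using the ByteBuffer-backed stream.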