Parsing a large JSON file in Node.js and processing each object independently

Lix*_*ang 19 javascript parsing json node.js

I need to read a large JSON file (around 630MB) in Node.js and insert each object into MongoDB.

I've already read the answers here: Parse large JSON file in Nodejs.

However, the answers there process the JSON file line by line, rather than object by object, so I still don't know how to get the objects out of this file and operate on them.

There are about 100,000 objects of this kind in my JSON file.

Data format:

[
  {
    "id": "0000000",
    "name": "Donna Blak",
    "livingSuburb": "Tingalpa",
    "age": 53,
    "nearestHospital": "Royal Children's Hospital",
    "treatments": {
        "19890803": {
            "medicine": "Stomach flu B",
            "disease": "Stomach flu"
        },
        "19740112": {
            "medicine": "Progeria C",
            "disease": "Progeria"
        },
        "19830206": {
            "medicine": "Poliomyelitis B",
            "disease": "Poliomyelitis"
        }
    },
    "class": "patient"
  },
 ...
]

Cheers,

Alex

Ant*_*ich 31

There's a nice module named 'stream-json' that does exactly what you want.

It can parse JSON files far exceeding available memory.

StreamArray handles a frequent use case: a huge array of relatively small objects, similar to Django-produced database dumps. It streams array components individually, taking care of assembling them automatically.

Here's a very basic example:

const StreamArray = require('stream-json/streamers/StreamArray');
const path = require('path');
const fs = require('fs');

const jsonStream = StreamArray.withParser();

//You'll get json objects here
//Key is an array-index here
jsonStream.on('data', ({key, value}) => {
    console.log(key, value);
});

jsonStream.on('end', () => {
    console.log('All done');
});

const filename = path.join(__dirname, 'sample.json');
fs.createReadStream(filename).pipe(jsonStream.input);

If you'd like to do something more complex, e.g. process one object at a time sequentially (keeping the order) and apply some async operations to each of them, you could implement a custom Writable stream like this:

const StreamArray = require('stream-json/streamers/StreamArray');
const {Writable} = require('stream');
const path = require('path');
const fs = require('fs');

const fileStream = fs.createReadStream(path.join(__dirname, 'sample.json'));
const jsonStream = StreamArray.withParser();

const processingStream = new Writable({
    write({key, value}, encoding, callback) {
        //Save to mongo or do any other async actions

        setTimeout(() => {
            console.log(value);
            //Next record will be read only after the current one is fully processed
            callback();
        }, 1000);
    },
    //Don't skip this, as we need to operate with objects, not buffers
    objectMode: true
});

//Pipe the streams as follows
fileStream.pipe(jsonStream.input);
jsonStream.pipe(processingStream);

//So we're waiting for the 'finish' event when everything is done.
processingStream.on('finish', () => console.log('All done'));
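For completeness, here is one way the placeholder setTimeout could be swapped for an actual MongoDB insert. This is a minimal sketch, assuming the official 'mongodb' driver (v3 or later); the connection URL and the 'test'/'patients' database and collection names are made up for illustration:

const StreamArray = require('stream-json/streamers/StreamArray');
const {Writable} = require('stream');
const path = require('path');
const fs = require('fs');
const {MongoClient} = require('mongodb');

(async () => {
    //Hypothetical connection URL and db/collection names - adjust to your setup
    const client = await MongoClient.connect('mongodb://localhost:27017');
    const patients = client.db('test').collection('patients');

    const fileStream = fs.createReadStream(path.join(__dirname, 'sample.json'));
    const jsonStream = StreamArray.withParser();

    const processingStream = new Writable({
        objectMode: true,
        write({key, value}, encoding, callback) {
            //insertOne returns a promise; signalling completion via callback()
            //means the next record is read only after this insert finishes
            patients.insertOne(value)
                .then(() => callback())
                .catch(callback);
        }
    });

    fileStream.pipe(jsonStream.input);
    jsonStream.pipe(processingStream);

    processingStream.on('finish', async () => {
        console.log('All done');
        await client.close();
    });
})();

Inserting one document per write() call keeps memory usage flat; if throughput matters, buffering records and flushing them in batches with insertMany is a common variation.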

Please note: the examples above were tested against 'stream-json@1.1.3'. For some previous versions (presumably pre-1.0.0) you might have to:

const StreamArray = require('stream-json/utils/StreamArray');

and then

const jsonStream = StreamArray.make();

  • stream-json has to be the best JSON stream reader so far. It saved me from building my own stream and picking out each object. Thanks for the answer. I had the same problem of running out of memory, and the only solution was to stream each object one at a time. (3 upvotes)
  • @PrestonDocks The package was updated in 2018 with a major version bump (better performance, more utility). Either use the previous major version, or read the new documentation and update your code accordingly. (2 upvotes)
  • @PrestonDocks, I've updated the answer, would you mind taking a look? (2 upvotes)