Cha*_*haz 6 json data-partitioning jq
I have a large JSON file with, I'm guessing, about 4 million objects. Each top-level object has several levels nested inside it. I want to split it into multiple files of 10,000 top-level objects each (retaining the structure within each object). jq should be able to do this, right? I'm not sure how.

So, data like this:
[{
"id": 1,
"user": {
"name": "Nichols Cockle",
"email": "ncockle0@tmall.com",
"address": {
"city": "Turt",
"state": "Th? Tr?n Yên Phú"
}
},
"product": {
"name": "Lychee - Canned",
"code": "36987-1526"
}
}, {
"id": 2,
"user": {
"name": "Isacco Scrancher",
"email": "iscrancher1@aol.com",
"address": {
"city": "Likwatang Timur",
"state": "Biharamulo"
}
},
"product": {
"name": "Beer - Original Organic Lager",
"code": "47993-200"
}
}, {
"id": 3,
"user": {
"name": "Elga Sikora",
"email": "esikora2@statcounter.com",
"address": {
"city": "Wenheng",
"state": "Piedra del Águila"
}
},
"product": {
"name": "Parsley - Dried",
"code": "36987-1632"
}
}, {
"id": 4,
"user": {
"name": "Andria Keatch",
"email": "akeatch3@salon.com",
"address": {
"city": "Arras",
"state": "Iracemápolis"
}
},
"product": {
"name": "Wine - Segura Viudas Aria Brut",
"code": "51079-385"
}
}, {
"id": 5,
"user": {
"name": "Dara Sprowle",
"email": "dsprowle4@slate.com",
"address": {
"city": "Huatai",
"state": "Kaduna"
}
},
"product": {
"name": "Pork - Hock And Feet Attached",
"code": "0054-8648"
}
}]
This is one complete object:
{
"id": 1,
"user": {
"name": "Nichols Cockle",
"email": "ncockle0@tmall.com",
"address": {
"city": "Turt",
"state": "Th? Tr?n Yên Phú"
}
},
"product": {
"name": "Lychee - Canned",
"code": "36987-1526"
}
}
Each file would contain the specified number of objects.
A JSON file or stream can be sliced with jq; see the script below. The sliceSize parameter sets the size of the slices and determines how many inputs are held in memory at the same time. This allows memory usage to be controlled.

The input does not have to be formatted. It can be either a stream of JSON inputs or an array of JSON inputs, and the file may be created with formatted or compact JSON.

The sliced output files can contain an array of inputs or a stream of inputs, in formatted or compact JSON (see the variants in write_files below).

A quick benchmark showed the time and memory consumption of the slicing process (measured on my laptop).
#!/bin/bash
SLICE_SIZE=2
JQ_SLICE_INPUTS='
2376123525 as $EOF | # random number that does not occur in the input stream to mark the end of the stream
foreach (inputs, $EOF) as $input
(
# init state
[[], []]; # .[0]: array to collect inputs
# .[1]: array that has collected $sliceSize inputs and is ready to be extracted
# update state
if .[0] | length == $sliceSize # enough inputs collected
or $input == $EOF # or end of stream reached
then [[$input], .[0]] # create new array to collect next inputs. Save array .[0] with $sliceSize inputs for extraction
else [.[0] + [$input], []] # collect input, nothing to extract after this state update
end;
# extract from state
if .[1] | length != 0
then .[1] # extract array that has collected $sliceSize inputs
else empty # nothing to extract right now (because still collecting inputs into .[0])
end
)
'
write_files() {
local FILE_NAME_PREFIX=$1
local FILE_COUNTER=0
while read -r line; do
FILE_COUNTER=$((FILE_COUNTER + 1))
FILE_NAME="${FILE_NAME_PREFIX}_$FILE_COUNTER.json"
echo "writing $FILE_NAME"
jq '.' > $FILE_NAME <<< "$line" # array of formatted json inputs
# jq -c '.' > $FILE_NAME <<< "$line" # compact array of json inputs
# jq '.[]' > $FILE_NAME <<< "$line" # stream of formatted json inputs
# jq -c '.[]' > $FILE_NAME <<< "$line" # stream of compact json inputs
done
}
echo "how to slice a stream of json inputs"
jq -n '{id: (range(5) + 1), a:[1,2]}' | # create a stream of json inputs
jq -n -c --argjson sliceSize $SLICE_SIZE "$JQ_SLICE_INPUTS" |
write_files "stream_of_json_inputs_sliced"
echo -e "\nhow to slice an array of json inputs"
jq -n '[{id: (range(5) + 1), a:[1,2]}]' | # create an array of json inputs
jq -n --stream 'fromstream(1|truncate_stream(inputs))' | # remove outer array to create stream of json inputs
jq -n -c --argjson sliceSize $SLICE_SIZE "$JQ_SLICE_INPUTS" |
write_files "array_of_json_inputs_sliced"
how to slice a stream of json inputs
writing stream_of_json_inputs_sliced_1.json
writing stream_of_json_inputs_sliced_2.json
writing stream_of_json_inputs_sliced_3.json
how to slice an array of json inputs
writing array_of_json_inputs_sliced_1.json
writing array_of_json_inputs_sliced_2.json
writing array_of_json_inputs_sliced_3.json
array_of_json_inputs_sliced_1.json
[
  {
    "id": 1,
    "a": [1,2]
  },
  {
    "id": 2,
    "a": [1,2]
  }
]
array_of_json_inputs_sliced_2.json
[
  {
    "id": 3,
    "a": [1,2]
  },
  {
    "id": 4,
    "a": [1,2]
  }
]
array_of_json_inputs_sliced_3.json
[
  {
    "id": 5,
    "a": [1,2]
  }
]
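To see the slicing filter from JQ_SLICE_INPUTS in isolation, the same foreach-based state machine can be run on a tiny stream (a minimal sketch with sliceSize hard-coded to 2, applied to the inputs 1 through 5); each output line is one slice:

```shell
# Same state machine as JQ_SLICE_INPUTS above, with sliceSize=2:
# .[0] collects inputs, .[1] holds a full slice ready to be emitted.
printf '1 2 3 4 5' | jq -n -c --argjson sliceSize 2 '
  2376123525 as $EOF |
  foreach (inputs, $EOF) as $input
    ( [[], []];
      if (.[0] | length) == $sliceSize or $input == $EOF
      then [[$input], .[0]]        # start a new slice, hand over the full one
      else [.[0] + [$input], []]   # keep collecting
      end;
      if (.[1] | length) != 0 then .[1] else empty end )
'
# → [1,2]
#   [3,4]
#   [5]
```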
[EDIT: This answer has been revised in accordance with the revision of the question.]
The key to solving the problem with jq is the -c command-line option, which produces output in JSON-Lines format (i.e., in the present case, one object per line). You can then use a tool such as awk or split to distribute those lines among several files.
If the file is not too big, then the simplest approach would be to start the pipeline with:
jq -c '.[]' INPUTFILE
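Combined with split(1), the whole pipeline can be sketched as follows (a sketch with illustrative file and prefix names; with the real data, -l would be 10000 rather than 2):

```shell
#!/bin/sh
# jq -c emits one object per line (JSON Lines); split then cuts every
# N lines into its own file (chunk_aa, chunk_ab, ... are split's defaults).
printf '[{"id":1},{"id":2},{"id":3}]' > input.json   # stand-in for the big file
jq -c '.[]' input.json | split -l 2 - chunk_
wc -l chunk_*    # chunk_aa: 2 lines, chunk_ab: 1 line
```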
If the file is too big to fit in memory, you could use jq's streaming parser, like so:
jq -cn --stream 'fromstream(1|truncate_stream(inputs))'
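On a small illustrative input, the streaming invocation turns the outer array into a stream of its elements — truncate_stream strips the outer array index so fromstream can rebuild the individual objects without the whole array ever being loaded at once:

```shell
printf '[{"a":1},{"b":2}]' |
jq -cn --stream 'fromstream(1|truncate_stream(inputs))'
# → {"a":1}
#   {"b":2}
```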
For more discussion of the streaming parser, see e.g. the relevant section of the jq FAQ: https://github.com/stedolan/jq/wiki/FAQ#streaming-json-parser
For different approaches to partitioning the output produced in the first step, see for example "How can I split a large text file into smaller files with an equal number of lines?"
If it is required that each output file be an array of objects, then I would probably use awk to perform both the partitioning and the reconstitution in one step, but there are many other reasonable approaches.
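A one-step awk partitioner along those lines might look like this (a sketch; the n=2 batch size and the part_N.json file names are made up for the demo):

```shell
#!/bin/sh
# Read JSON Lines on stdin; wrap every n records in [ ... ] and write each
# batch to its own file (part_1.json, part_2.json, ...).
printf '{"id":1}\n{"id":2}\n{"id":3}\n' |
awk -v n=2 '
  NR % n == 1 { if (out) { print "]" > out; close(out) }   # finish previous batch
                out = "part_" (++f) ".json"
                print "[" > out }                          # open a new batch
  { print (NR % n == 1 ? "" : ",") $0 > out }              # comma-separate records
  END { if (out) print "]" > out }                         # close the last batch
'
```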
For reference, if the original file consists of a stream or sequence of JSON objects, then the appropriate invocation would be:
jq -n -c inputs INPUTFILE
Using inputs in this way makes it possible to process arbitrarily many objects efficiently.
Viewed 2802 times