How do I fix a memory error when loading a 1 GB JSON file in Python?

lpd*_*lpd 1 python csv io out-of-memory

I am trying to convert a JSON file to CSV but I get a memory error. Is there an efficient way to tune this code so it can handle large JSON files in Python?

import csv
import json
import sys

def change(row, pastkeys=()):
    # Recursively flatten nested dicts/lists into {key-tuple: scalar}
    result = {}
    for key in row:
        newkey = pastkeys + (key,)
        val = row[key]
        if isinstance(val, dict):
            result.update(change(val, newkey))
        elif isinstance(val, list):
            # Treat a list as a dict keyed by its indices
            result.update(change(dict(enumerate(val)), newkey))
        else:
            result[newkey] = val
    return result

a = open(sys.argv[1], 'r')
lines = list(a)
try:
    # First try the whole file as a single JSON document
    data = json.loads(''.join(lines))
    if isinstance(data, dict):
        data = [data]
except ValueError:
    # Fall back to one JSON document per line
    data = [json.loads(line) for line in lines]
result = []
fields = set()
for row in data:
    flat = change(row)
    fields |= set(flat.keys())
    result.append(flat)
fields = sorted(fields)
out1 = open(sys.argv[2], 'w')
out = csv.writer(out1, lineterminator='\n')
out.writerow(['-'.join(str(f) for f in field) for field in fields])
for row in result:
    out.writerow([row.get(field, '') for field in fields])
out1.close()
a.close()
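The memory error comes from holding the entire file, the parsed data, and every flattened row in memory at once. If each record sits on its own line (newline-delimited JSON), nothing needs to be loaded whole: the file can be scanned twice, once to collect the header fields and once to write rows, keeping only one record in memory at a time. A minimal stdlib sketch of that idea (`flatten` and `ndjson_to_csv` are illustrative names, not part of the original code):

```python
import csv
import json

def flatten(obj, prefix=()):
    """Recursively flatten dicts/lists into {key-tuple: scalar}."""
    if isinstance(obj, dict):
        items = obj.items()
    elif isinstance(obj, list):
        items = enumerate(obj)
    else:
        return {prefix: obj}
    result = {}
    for key, val in items:
        result.update(flatten(val, prefix + (key,)))
    return result

def ndjson_to_csv(in_path, out_path):
    # Pass 1: collect the union of all flattened key paths.
    fields = set()
    with open(in_path) as f:
        for line in f:
            if line.strip():
                fields |= set(flatten(json.loads(line)))
    # Sort by stringified tuples so int list indices and str keys compare.
    fields = sorted(fields, key=lambda t: tuple(map(str, t)))
    # Pass 2: write one CSV row per record, never holding more than one.
    with open(in_path) as f, open(out_path, 'w', newline='') as out_f:
        writer = csv.writer(out_f, lineterminator='\n')
        writer.writerow(['-'.join(map(str, t)) for t in fields])
        for line in f:
            if line.strip():
                flat = flatten(json.loads(line))
                writer.writerow([flat.get(t, '') for t in fields])
```

The two-pass design trades a second read of the file for bounded memory, which is usually the right trade at 1 GB.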

sal*_*ssi 5

You can try ijson. It is a module that consumes JSON as a stream rather than as one block file. ijson is to JSON what SAX is to XML.

import ijson

# Iterate over parse events without loading the whole file
for prefix, the_type, value in ijson.parse(open(jsonFileName)):
    print(prefix, the_type, value)
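If installing ijson is not an option, the same streaming idea can be roughly approximated with the standard library: `json.JSONDecoder.raw_decode` parses one value off the front of a string and reports where it ended, so a top-level JSON array can be consumed item by item while reading the file in chunks. A sketch under those assumptions (`iter_json_array` is an illustrative name; error handling is deliberately minimal):

```python
import json

def iter_json_array(stream, chunk_size=65536):
    """Yield the items of a top-level JSON array one at a time,
    reading the stream in fixed-size chunks instead of all at once."""
    decoder = json.JSONDecoder()
    buf = stream.read(chunk_size).lstrip()
    if not buf.startswith('['):
        raise ValueError("expected a top-level JSON array")
    buf = buf[1:]
    while True:
        # Skip the separators between items.
        buf = buf.lstrip().lstrip(',').lstrip()
        if buf.startswith(']'):
            return  # end of array
        try:
            obj, end = decoder.raw_decode(buf)
        except ValueError:
            # The next item is split across chunks: read more input.
            more = stream.read(chunk_size)
            if not more:
                return  # truncated input; give up quietly in this sketch
            buf += more
            continue
        yield obj
        buf = buf[end:]
```

Memory use is then bounded by the chunk size plus the largest single item, not by the file size, which is the property that matters for a 1 GB input.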