Asked by Luc*_*cas (score 9) — tags: python, csv, dictionary
I'm running a piece of code that has always worked for me. This time I'm running it on two .csv files: "data" (24 MB) and "data1" (475 MB). "data" has 3 columns with about 680,000 elements each, while "data1" has 3 columns with 33,000,000 elements each. When I run the code, it gets "Killed: 9" after about 5 minutes of processing. If this is a memory problem, how can I fix it? Any suggestions are welcome!
Here is the code:
import csv
import numpy as np
from collections import OrderedDict  # to save keys order
from numpy import genfromtxt

my_data = genfromtxt('data.csv', dtype='S',
                     delimiter=',', skip_header=1)
my_data1 = genfromtxt('data1.csv', dtype='S',
                      delimiter=',', skip_header=1)

d = OrderedDict((rows[2], rows[1]) for rows in my_data)
d1 = dict((rows[0], rows[1]) for rows in my_data1)

dset = set(d)  # returns keys
d1set = set(d1)
d_match = dset.intersection(d1)  # returns matched keys

import sys
sys.stdout = open("rs_pos_ref_alt.csv", "w")

for row in my_data:
    if row[2] in d_match:
        print [row[1], row[2]]
"数据"的标题是:
dbSNP RS ID Physical Position
0 rs4147951 66943738
1 rs2022235 14326088
2 rs6425720 31709555
3 rs12997193 106584554
4 rs9933410 82323721
5 rs7142489 35532970
"data1"的标题是:
V2 V4 V5
10468 TC T
10491 CC C
10518 TG T
10532 AG A
10582 TG T
Answered by 小智 (score 10)
Most likely the kernel is killing it because your script consumes too much memory: genfromtxt builds the whole 33-million-row array in RAM, and the dict built from it adds tens of millions of Python objects on top of that. You need to take a different approach and minimize how much data sits in memory at once.
You may also find this question useful: Very large matrices using Python and NumPy.
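The usual trick from that question, if an array really will not fit in RAM, is to keep it on disk with numpy.memmap so rows are paged in on demand. A minimal sketch of that idea only; the file data1.bin, the row count, and the 16-byte field width are hypothetical here, since memmap needs a fixed-width binary file rather than a CSV:

import numpy as np

# Hypothetical: data1.csv converted beforehand to a fixed-width binary
# file of 33,000,000 rows x 3 fields, each at most 16 bytes.
mm = np.memmap('data1.bin', dtype='S16', mode='r', shape=(33000000, 3))

# Only the pages actually touched are read from disk; the array is
# never fully loaded into memory.
print(mm[0])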
In the snippet below I try to avoid loading the huge data1.csv into memory by processing it line by line. Give it a try.
import csv
from collections import OrderedDict  # to save keys order

with open('data.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    next(reader)  # skip header
    d = OrderedDict((rows[2], {"val": rows[1], "flag": False}) for rows in reader)

with open('data1.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    next(reader)  # skip header
    for rows in reader:
        if rows[0] in d:
            d[rows[0]]["flag"] = True

import sys
sys.stdout = open("rs_pos_ref_alt.csv", "w")

for k, v in d.iteritems():
    if v["flag"]:
        print [v["val"], k]
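For reference, here is a Python 3 sketch of the same streaming approach (untested against your files; file names and column positions are taken from the snippet above). It also replaces the sys.stdout redirection with csv.writer, so the output file contains real comma-separated rows instead of printed Python lists:

import csv
from collections import OrderedDict

# Small file: build a position -> rs ID lookup (fits comfortably in memory).
with open('data.csv', newline='') as f:
    reader = csv.reader(f)
    next(reader)  # skip header
    d = OrderedDict((row[2], row[1]) for row in reader)

# Big file: stream it row by row, remembering only which keys matched.
matched = set()
with open('data1.csv', newline='') as f:
    reader = csv.reader(f)
    next(reader)  # skip header
    for row in reader:
        if row[0] in d:
            matched.add(row[0])

# Write the matches as proper CSV, preserving the order of data.csv.
with open('rs_pos_ref_alt.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    for pos, rs_id in d.items():
        if pos in matched:
            writer.writerow([rs_id, pos])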