Speeding up a Django form upload of a large (500k obs) CSV file into a MySQL DB

PR1*_*012 · 2 · tags: python, mysql, csv, django

The Django table has roughly 430,000 obs, loaded from a flat, 230 MB CSV file whose details are shown below in models.py. I've considered reading the CSV in chunks, but I think the processor function that populates the MySQL table is my hang-up; it takes 20+ hours. How can I speed this up?

class MastTable(models.Model):
    evidence = models.ForeignKey(Evidence, blank=False)
    var2 = models.CharField(max_length=10, blank=True, null=True)
    var3 = models.CharField(max_length=10, blank=True, null=True)
    var4 = models.CharField(max_length=10, blank=True, null=True)
    var5 = models.CharField(max_length=10, blank=True, null=True)
    var6 = models.DateTimeField(blank=True, null=True)
    var7 = models.DateTimeField(blank=True, null=True)
    var8 = models.DateTimeField(blank=True, null=True)
    var9 = models.DateTimeField(blank=True, null=True)
    var10 = models.DateTimeField(blank=True, null=True)
    var11 = models.DateTimeField(blank=True, null=True)
    var12 = models.DateTimeField(blank=True, null=True)
    var13 = models.DateTimeField(blank=True, null=True)
    var14 = models.CharField(max_length=500, blank=True, null=True)
    var15 = models.CharField(max_length=500, blank=True, null=True)
    var16 = models.CharField(max_length=50, blank=True, null=True)
    var17 = models.CharField(max_length=500, blank=True, null=True)
    var18 = models.CharField(max_length=500, blank=True, null=True)
    var19 = models.CharField(max_length=500, blank=True, null=True)
    var20 = models.CharField(max_length=500, blank=True, null=True)
    var21 = models.CharField(max_length=500, blank=True, null=True)
    var22 = models.CharField(max_length=500, blank=True, null=True)
    var23 = models.DateTimeField(blank=True, null=True)
    var24 = models.DateTimeField(blank=True, null=True)
    var25 = models.DateTimeField(blank=True, null=True)
    var26 = models.DateTimeField(blank=True, null=True)

This helper function creates a reader object for the CSV and decodes any funky encodings in the file before the MySQL upload:

def unicode_csv_reader(utf8_data, dialect=csv.excel, **kwargs):
    # Wraps csv.reader and decodes each cell before handing rows to the loader
    csv_reader = csv.reader(utf8_data, dialect=dialect, **kwargs)
    for row in csv_reader:
        yield [unicode(cell, 'ISO-8859-1') for cell in row]

Then a function in the utils.py file accesses a DB table (named 'extract_properties') containing the file headers, in order to identify which processor function to dispatch to. The processor function looks like this:

def processor_table(extract_properties):  #Process the table into MySQL
    evidence_obj, created = Evidence.objects.get_or_create(case=case_obj,
        evidence_number=extract_properties['evidence_number']) #This retrieves the Primary Key
    reader = unicode_csv_reader(extract_properties['uploaded_file'],dialect='pipes') #CSVfunction  
    for idx, row in enumerate(reader):
        if idx <= (extract_properties['header_row_num'])+3: #Header is not always 1st row of file
            pass
        else:
            try:
                obj = MastTable.objects.create( #create() returns one object, not a tuple (I was originally using 'get_or_create')
                    evidence=evidence_obj,
                    var2=row[0],
                    var3=row[1],
                    var4=row[2],
                    var5=row[3],
                    var6=date_convert(row[4],row[5]), #funct using 'dateutil.parser.parse'
                    var7=date_convert(row[6],row[7]),
                    var8=date_convert(row[8],row[9]),
                    var9=date_convert(row[10],row[11]),
                    var10=date_convert(row[12],row[13]),
                    var11=date_convert(row[14],row[15]),
                    var12=date_convert(row[16],row[17]),
                    var13=date_convert(row[18],row[19]),
                    var14=row[20],
                    var15=row[21],
                    var16=row[22],
                    var17=row[23],
                    var18=row[24],
                    var19=row[25],
                    var20=row[26],
                    var21=row[27],
                    var22=row[28],
                    var23=date_convert(row[29],row[30]),
                    var24=date_convert(row[31],row[32]),
                    var25=date_convert(row[33],row[34]),
                    var26=date_convert(row[35],row[36]),)
            except Exception as e:  #This logs any exceptions to a custom DB table
                print "Error",e
                print "row",row
                print "idx:",idx
                SystemExceptionLog.objects.get_or_create(
                    indexrow=idx, errormsg=e.args[0],
                    timestamp=datetime.datetime.now(),
                    uploadedfile=extract_properties['uploaded_file'])
                continue
    return True 
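
date_convert itself isn't shown in the question; based on the comment above, a hypothetical reconstruction (an assumption, not the asker's actual code) might look like this:

from dateutil import parser

def date_convert(date_str, time_str):
    # Hypothetical: combine the separate date and time columns and parse them
    try:
        return parser.parse(date_str + ' ' + time_str)
    except (ValueError, TypeError):
        return None  # the DateTimeFields are null=True, so None is acceptable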

Finally, the views.py code below accepts the file(s) and calls the processor above to populate the DB; it checks for valid form data and, if valid, passes each file on to the file handler:

def upload_file(request):
    if request.method == 'POST':
        form = UploadFileForm(request.POST, request.FILES)
        if form.is_valid():
            for _file in request.FILES.getlist('file'): 
                extract_properties = get_file_properties(_file) 
                if extract_properties:
                    for property in extract_properties: #File is found and processor kicked off 
                        print "starting parser"
                        try:
                            property['evidence_number'] = request.POST.get('evidence_number')
                            result = process_extract(property)
                            if result is None:
                                print 'Unable to determine extract properties!'
                        except Exception as e:
                            print "!!!!!!!"
                            print "Error, could not upload", e
                            pass
                else:
                    print 'Unable to identify file uploaded!' 
            return HttpResponseRedirect('')
        print form
    else:
        form = UploadFileForm()
    return render_to_response('nettop/upload_file.html',  # The web frontend Page for Upload
                              {'form': form},
                              context_instance=RequestContext(request))

Answered by knb*_*nbk (5 votes)

The most basic and effective optimization in Django is to reduce the number of queries to the database. That's true for 100 queries, and it's certainly true for 500,000.

Instead of calling MastTable.objects.create() for every row, you should build a list of unsaved model instances and create them with MastTable.objects.bulk_create(list_of_models), which hits the database in as few round trips as possible. This should speed things up enormously.
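
A minimal sketch of that change, adapted from the processor_table in the question; the 5,000-row batch size is an arbitrary choice, and the per-row exception logging is omitted for brevity:

def processor_table(extract_properties):
    evidence_obj, created = Evidence.objects.get_or_create(case=case_obj,
        evidence_number=extract_properties['evidence_number'])
    reader = unicode_csv_reader(extract_properties['uploaded_file'], dialect='pipes')
    unsaved = []
    for idx, row in enumerate(reader):
        if idx <= extract_properties['header_row_num'] + 3:
            continue  # skip everything up to the header offset, as before
        unsaved.append(MastTable(  # build the instance without saving it
            evidence=evidence_obj,
            var2=row[0],
            var3=row[1],
            # ... remaining varN / date_convert fields exactly as in the question ...
            var26=date_convert(row[35], row[36])))
        if len(unsaved) >= 5000:  # flush in batches to bound memory use
            MastTable.objects.bulk_create(unsaved)
            unsaved = []
    if unsaved:
        MastTable.objects.bulk_create(unsaved)  # flush the final partial batch
    return True

Note that bulk_create does not call each model's save() method and does not send pre_save/post_save signals, which is fine here since every CSV row simply becomes a new record.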

If you're using MySQL, you can increase the max_allowed_packet setting to allow larger batches; its default of 1 MB is very low. PostgreSQL has no hard-coded limit. If you still run into performance problems, you can switch to raw SQL statements, since creating 500,000 Python objects carries some overhead. In a recent test of mine, executing the exact same query through connection.cursor was about 20% faster.
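
For illustration, a rough sketch of the raw-SQL route through Django's connection.cursor; the table name 'app_masttable' and the abbreviated column list are assumptions (check the model's actual db_table and extend the columns to match):

from django.db import connection

def insert_rows_raw(rows):
    # rows is a list of tuples in the same order as the column list below
    sql = ("INSERT INTO app_masttable (evidence_id, var2, var3) "
           "VALUES (%s, %s, %s)")  # extend with the remaining varN columns
    cursor = connection.cursor()
    # executemany lets the driver batch the inserts; keep each batch below
    # max_allowed_packet or MySQL will reject it
    cursor.executemany(sql, rows)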

It's also a good idea to hand the actual processing of the file off to a background process, e.g. using Celery, or to use a StreamingHttpResponse to give feedback while the import runs.
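
A minimal Celery sketch, assuming a configured Celery app and that extract_properties can be made serializable (in practice you would pass a file path rather than an open file object, since task arguments cross process boundaries):

from celery import shared_task

@shared_task
def process_extract_async(extract_properties):
    # Runs the slow CSV import outside the request/response cycle
    return processor_table(extract_properties)

The view would then replace the blocking call with process_extract_async.delay(property) and return to the user immediately.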