Below is the code I'd like some help with. I have to run it over more than 1,300,000 rows, which means it takes up to 40 minutes to insert ~300,000 rows.
I figure bulk insert is the route to go to speed it up? Or is it slow because I'm iterating over the rows via the for data in reader: portion?
import csv
import os

#Opens the prepped csv file
with open(os.path.join(newpath, outfile), 'r') as f:
#hooks csv reader to file
reader = csv.reader(f)
#pulls out the columns (which match the SQL table)
columns = next(reader)
#trims any extra spaces
columns = [x.strip(' ') for x in columns]
#starts SQL statement
query = 'insert into SpikeData123({0}) values ({1})'
#puts column names in SQL query 'query'
query = query.format(','.join(columns), ','.join('?' * len(columns)))
print 'Query is: %s' % query
…
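If it helps, here is a minimal sketch of the batching approach, assuming pyodbc is the driver in use and that conn_str, newpath, and outfile stand in for your own connection string and paths. It keeps your column/query setup but feeds rows to cursor.executemany in chunks, with fast_executemany (available in pyodbc 4.0.19+) turned on so each chunk goes to the server in one round trip instead of one execute per row:

import csv
import os
import pyodbc

# conn_str is a placeholder -- substitute your own connection details
conn_str = 'DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes'
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
# fast_executemany (pyodbc 4.0.19+) sends each batch of parameter sets
# in a single round trip instead of one execute per row
cursor.fast_executemany = True

with open(os.path.join(newpath, outfile), 'r') as f:
    reader = csv.reader(f)
    # header row supplies the column names, stripped of stray spaces
    columns = [x.strip(' ') for x in next(reader)]
    query = 'insert into SpikeData123({0}) values ({1})'.format(
        ','.join(columns), ','.join('?' * len(columns)))

    batch = []
    for row in reader:
        batch.append(row)
        if len(batch) == 10000:      # batch size is a tunable guess
            cursor.executemany(query, batch)
            batch = []
    if batch:                        # flush any leftover rows
        cursor.executemany(query, batch)

conn.commit()
conn.close()

If the CSV file is reachable from the SQL Server machine itself, T-SQL's BULK INSERT statement pointed directly at the file is typically faster still, since the rows never pass through Python at all.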