Goal

I downloaded a CSV file from Hotmail, but it has a lot of duplicates. These duplicates are complete copies and I don't know why my phone created them.

I want to get rid of the duplicates.
Approach

Write a Python script to remove the duplicates.

Specifications

- Windows XP SP 3
- Python 2.7
- CSV file with 400 contacts
jam*_*lak 50
Update: 2016

If you are happy to use the helpful more_itertools external library:
from more_itertools import unique_everseen
with open('1.csv', 'r') as f, open('2.csv', 'w') as out_file:
    out_file.writelines(unique_everseen(f))
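For context, unique_everseen yields elements in the order they are first seen and skips repeats; a minimal pure-Python sketch of that behavior (the function name here is mine, not the library's):

```python
def unique_everseen_sketch(iterable):
    """Yield items in first-seen order, dropping repeats (hashable items only)."""
    seen = set()
    for item in iterable:
        if item not in seen:
            seen.add(item)
            yield item

# Duplicated lines collapse to their first occurrence:
lines = ["a,1\n", "b,2\n", "a,1\n", "c,3\n"]
print(list(unique_everseen_sketch(lines)))  # ['a,1\n', 'b,2\n', 'c,3\n']
```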
A more efficient version of @IcyFlame's solution:
with open('1.csv', 'r') as in_file, open('2.csv', 'w') as out_file:
    seen = set()  # set for fast O(1) amortized lookup
    for line in in_file:
        if line in seen:
            continue  # skip duplicate
        seen.add(line)
        out_file.write(line)
To edit the same file in place you can use this:
import fileinput

seen = set()  # set for fast O(1) amortized lookup
for line in fileinput.FileInput('1.csv', inplace=1):
    if line in seen:
        continue  # skip duplicate
    seen.add(line)
    print line,  # standard output is now redirected to the file
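One caution with the in-place approach: if the loop fails midway, the file can be left incomplete. fileinput accepts a backup suffix so the original is preserved; a sketch assuming a hypothetical sample.csv (sys.stdout.write is used instead of the Python 2 print statement so the snippet runs on either version):

```python
import fileinput
import sys

# Build a small sample file (hypothetical name, for demonstration only)
with open('sample.csv', 'w') as f:
    f.write("a,1\na,1\nb,2\n")

seen = set()
# backup='.bak' preserves the original as sample.csv.bak
for line in fileinput.FileInput('sample.csv', inplace=1, backup='.bak'):
    if line not in seen:
        seen.add(line)
        sys.stdout.write(line)  # stdout is redirected into the file

print(open('sample.csv').read())
```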
And*_*ura 18
You can deduplicate efficiently using Pandas:
import pandas as pd

file_name = "my_file_with_dupes.csv"
file_name_output = "my_file_without_dupes.csv"

df = pd.read_csv(file_name, sep=",")  # use sep="\t" for tab-separated input

# Notes:
# - `subset=None` means that every column is used
#   to determine if two rows are different; to change that, specify
#   the columns as an array
# - `inplace=True` means that the data structure is changed and
#   the duplicate rows are gone
df.drop_duplicates(subset=None, inplace=True)

# Write the results to a different file
# (index=False avoids writing the row index as an extra column)
df.to_csv(file_name_output, index=False)
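The effect of subset — comparing only some columns when deciding what counts as a duplicate — can be sketched in plain Python too; the column choice here (the first two fields) is purely illustrative:

```python
rows = [
    ["Alice", "555-0100", "home"],
    ["Alice", "555-0100", "mobile"],  # duplicate under (name, phone)
    ["Bob",   "555-0199", "home"],
]

seen = set()
deduped = []
for row in rows:
    key = tuple(row[:2])  # like subset=["name", "phone"]: ignore the last column
    if key not in seen:
        seen.add(key)
        deduped.append(row)

print(deduped)  # the first Alice row and the Bob row survive
```

As with drop_duplicates, the first occurrence of each key is the one kept.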
You can use the following script:

Prerequisites:

- 1.csv is the file that contains the duplicates
- 2.csv is the output file that will be free of duplicates once this script is executed

Code
inFile = open('1.csv', 'r')
outFile = open('2.csv', 'w')

listLines = []

for line in inFile:
    if line in listLines:
        continue
    else:
        outFile.write(line)
        listLines.append(line)

outFile.close()
inFile.close()
Algorithm explanation

Here, what I am doing is:

1. opening the input file in read mode and the output file in write mode
2. reading the input file line by line, skipping any line that is already in listLines
3. writing each new line to the output file and appending it to listLines
4. closing both files