Tags: python, csv, optimization, dataframe, pandas
I need to process a large CSV file with about 3 million rows and 7 columns. DataFrame shape: (3421083, 7).
My plan is to drop every row that contains certain values (customer IDs). Here is how I am doing it:
import pandas as pd

# keep track of iterations
track = 0

# import all transactions (orders.csv)
transactions = pd.read_csv('transactions.csv')

# select all orders that are electronics orders and put them into a df
is_electronics = transactions[transactions.type == "electronics"]

# list that will store users to remove from transactions.csv
users_to_remove = []

# add all users that ordered electronics to the list
for user in is_electronics.user_id:
    users_to_remove.append(user)

# drop those users' rows, one user at a time
for user in users_to_remove:
    transactions = transactions[transactions.user_id != user]
    track += 1
    if track == 100000:
        print(track)
        track = 0

transactions.to_csv('not_electronics.csv', index=False)
This operation takes a very long time to run; after an hour it still had not finished.

I have a quad-core desktop i5 at 3.2 GHz with 8 GB of RAM, but Activity Monitor shows the machine using only 5 GB of RAM and about 40% CPU.

Is there any way to speed this computation up, either by changing the code or by using another library? I also have a GPU (GTX 970); can I use it for this?

Thanks.
Use isin. Your second loop rebuilds the entire DataFrame once per user, so it performs roughly rows × unique-users comparisons and copies; a single vectorized isin mask does the same job in one pass:
# boolean mask of electronics orders
is_electronics = transactions.type == 'electronics'
# every distinct user who ever ordered electronics
users_to_remove = transactions.loc[is_electronics, 'user_id'].unique()
# keep only rows whose user_id is not in that set
transactions[~transactions.user_id.isin(users_to_remove)]
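For context, here is a minimal end-to-end sketch of the vectorized version, under the assumption that transactions.csv has the type and user_id columns used in the question (the file and column names come from the question itself):

import pandas as pd

# load the full transaction log
transactions = pd.read_csv('transactions.csv')

# one linear scan: collect every user who ever ordered electronics
electronics_users = transactions.loc[
    transactions['type'] == 'electronics', 'user_id'
].unique()

# one more linear scan: keep rows whose user is NOT in that set;
# isin hashes electronics_users, so the filter is O(rows) instead of
# O(rows x users) like the per-user loop in the question
filtered = transactions[~transactions['user_id'].isin(electronics_users)]

filtered.to_csv('not_electronics.csv', index=False)

Because the membership test is a single hash lookup per row, both scans together should take seconds rather than hours on a (3421083, 7) frame.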
I have deleted my earlier suggestion; this one is safe.
For posterity, this is @DSM's suggestion:
import numpy as np

# work on the underlying numpy arrays directly
is_electronics = transactions.type.values == 'electronics'
users = transactions.user_id.values
# test each user's membership in the set of electronics users
transactions[~np.in1d(users, users[is_electronics])]
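One small note for later readers: recent NumPy releases recommend np.isin over the older np.in1d. With the same variables as @DSM's snippet above, the drop-in replacement would be:

# np.isin is the modern spelling of np.in1d for this 1-D membership test
transactions[~np.isin(users, users[is_electronics])]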