Ben — python, dataframe, pandas
I have data like this:
location sales store
0 68 583 17
1 28 857 2
2 55 190 59
3 98 517 64
4 94 892 79
...
For each unique (location, store) pair there are one or more sales. I want to add a column, pcnt_sales, that shows each row's sales as a percentage of the total sales for that (location, store) pair:
location sales store pcnt_sales
0 68 583 17 0.254363
1 28 857 2 0.346543
2 55 190 59 1.000000
3 98 517 64 0.272105
4 94 892 79 1.000000
...
This works, but it's slow:
import pandas as pd
import numpy as np
import timeit

df = pd.DataFrame({'location': np.random.randint(0, 100, 10000),
                   'store': np.random.randint(0, 100, 10000),
                   'sales': np.random.randint(0, 1000, 10000)})

start_time = timeit.default_timer()
df['pcnt_sales'] = df.groupby(['location', 'store'])['sales'].apply(lambda x: x/x.sum())
print(timeit.default_timer() - start_time)  # 1.46 seconds
By comparison, R's data.table does this extremely fast:
library(data.table)
dt <- data.table(location=sample(100, size=10000, replace=TRUE),
                 store=sample(100, size=10000, replace=TRUE),
                 sales=sample(1000, size=10000, replace=TRUE))
ptm <- proc.time()
dt[, pcnt_sales:=sales/sum(sales), by=c("location", "store")]
proc.time() - ptm # 0.007 seconds
How can I do this efficiently in Pandas (especially since my real dataset has millions of rows)?
For performance you want to avoid apply here. You can instead use transform, which broadcasts the groupby result back onto the original index, at which point the division runs at vectorized speed:
>>> %timeit df['pcnt_sales'] = df.groupby(['location', 'store'])['sales'].apply(lambda x: x/x.sum())
1 loop, best of 3: 2.27 s per loop
>>> %timeit df['pcnt_sales2'] = (df["sales"] /
df.groupby(['location', 'store'])['sales'].transform(sum))
100 loops, best of 3: 6.25 ms per loop
>>> df["pcnt_sales"].equals(df["pcnt_sales2"])
True