gon*_*aao 27 python random group-by pandas pandas-groupby
I know this must have been answered somewhere, but I can't find it.
Question: how do I sample from each group after a groupby operation?
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6, 7],
                   'b': [1, 1, 1, 0, 0, 0, 0]})
grouped = df.groupby('b')
# now sample from each group, e.g., I want 30% of each group
EdC*_*ica 47
Apply a lambda and call sample with the frac parameter:
In [2]:
df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6, 7],
                   'b': [1, 1, 1, 0, 0, 0, 0]})
grouped = df.groupby('b')
grouped.apply(lambda x: x.sample(frac=0.3))
Out[2]:
     a  b
b
0 6  7  0
1 2  3  1
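If the draw needs to be repeatable, a random_state can be threaded through the lambda. A minimal sketch (the seed value 0 is an arbitrary choice):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6, 7],
                   'b': [1, 1, 1, 0, 0, 0, 0]})

# Fixing random_state inside the lambda makes every run draw the
# same 30% from each group (here: 1 row from each of the 2 groups).
sampled = df.groupby('b').apply(lambda x: x.sample(frac=0.3, random_state=0))
```

Without random_state, each call returns a different sample.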
cs9*_*s95 11
You can use GroupBy.apply with sample. You do not need to use a lambda; apply accepts keyword arguments:
frac = 0.3
df.groupby('b').apply(pd.DataFrame.sample, frac=frac)
     a  b
b
0 6  7  0
1 0  1  1
If the MultiIndex is not required, you may specify group_keys=False to groupby:
df.groupby('b', group_keys=False).apply(pd.DataFrame.sample, frac=.3)
   a  b
6  7  0
2  3  1
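For what it's worth, newer pandas versions (1.1 and later) also provide GroupBy.sample, which does this in one call without apply and keeps the original flat index. A minimal sketch, assuming pandas >= 1.1:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6, 7],
                   'b': [1, 1, 1, 0, 0, 0, 0]})

# GroupBy.sample draws within each group directly; the result keeps the
# original index, so there is no MultiIndex to strip afterwards.
out = df.groupby('b').sample(frac=0.3, random_state=0)
```

With frac=0.3 this yields one row from each group (30% of 4 rows and of 3 rows both round to 1).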
Sampling N rows from each group: apply is slow. If your use case is to sample a fixed number of rows per group, you can shuffle the DataFrame beforehand, then use GroupBy.head.
df.sample(frac=1).groupby('b').head(2)
   a  b
2  3  1
5  6  0
1  2  1
4  5  0
This is the same as df.groupby('b', group_keys=False).apply(pd.DataFrame.sample, n=N), but faster:
%timeit df.groupby('b', group_keys=False).apply(pd.DataFrame.sample, n=2)
# 3.19 ms ± 90.5 µs

%timeit df.sample(frac=1).groupby('b').head(2)
# 1.56 ms ± 103 µs
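The shuffle-then-head trick can also be made repeatable by seeding the one-time shuffle itself. A small sketch (random_state=0 is an arbitrary choice):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6, 7],
                   'b': [1, 1, 1, 0, 0, 0, 0]})

# Seed the single up-front shuffle; head(2) then deterministically takes
# the first 2 shuffled rows seen for each value of 'b'.
out = df.sample(frac=1, random_state=0).groupby('b').head(2)
```

Since both groups here have at least 2 rows, this always returns exactly 2 rows per group.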