Via*_*dov 66 · Tags: python, group-by, mode, pandas, pandas-groupby
I have a DataFrame with three string columns. I know that exactly one value in the third column is valid for every combination of the first two. To clean the data, I have to group the DataFrame by the first two columns and, for each combination, select the most common value of the third column.
My code:
import pandas as pd
from scipy import stats

source = pd.DataFrame({'Country': ['USA', 'USA', 'Russia', 'USA'],
                       'City': ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'],
                       'Short name': ['NY', 'New', 'Spb', 'NY']})
print(source.groupby(['Country', 'City']).agg(lambda x: stats.mode(x['Short name'])[0]))
The last line of code doesn't work: it says KeyError: 'Short name', and if I try to group by City only, I get an AssertionError. How can I fix this?
HYR*_*YRY 107
You can use value_counts() to get a Series of counts, and take its first row:
import pandas as pd

source = pd.DataFrame({'Country': ['USA', 'USA', 'Russia', 'USA'],
                       'City': ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'],
                       'Short name': ['NY', 'New', 'Spb', 'NY']})
source.groupby(['Country','City']).agg(lambda x:x.value_counts().index[0])
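As a quick sanity check on why index[0] is the mode here: value_counts sorts counts in descending order, so the first index label is the most frequent value. A minimal illustration of my own, not from the original answer:

```python
import pandas as pd

s = pd.Series(['NY', 'New', 'NY'])
counts = s.value_counts()   # sorted by count, descending: NY -> 2, New -> 1
print(counts.index[0])      # the most frequent value comes first
```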
cs9*_*s95 56
(Updated)
pd.Series.mode is available! Use groupby and GroupBy.agg, and apply the pd.Series.mode function to each group:
source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode)
Country City
Russia Sankt-Petersburg Spb
USA New-York NY
Name: Short name, dtype: object
If you need this as a DataFrame, use
source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode).to_frame()
Short name
Country City
Russia Sankt-Petersburg Spb
USA New-York NY
What's useful about Series.mode is that it always returns a Series, making it very compatible with agg and apply, especially when reconstructing groupby output. It is also faster.
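That claim is easy to verify: even with a single mode, pd.Series.mode still returns a Series rather than a scalar. A small illustrative check of my own:

```python
import pandas as pd

s = pd.Series(['NY', 'NY', 'New'])
m = pd.Series.mode(s)   # equivalent to s.mode()
print(type(m))          # a pandas Series, even though there is only one mode
print(list(m))          # ['NY']
```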
# Accepted answer.
%timeit source.groupby(['Country','City']).agg(lambda x:x.value_counts().index[0])
# Proposed in this post.
%timeit source.groupby(['Country','City'])['Short name'].agg(pd.Series.mode)
5.56 ms ± 343 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.76 ms ± 387 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Series.mode also does a good job when there are multiple modes:
# `DataFrame.append` was removed in pandas 2.0; use `pd.concat` instead.
source2 = pd.concat(
    [source, pd.DataFrame([{'Country': 'USA', 'City': 'New-York', 'Short name': 'New'}])],
    ignore_index=True)
# Now `source2` has two modes for the
# ("USA", "New-York") group, they are "NY" and "New".
source2
Country City Short name
0 USA New-York NY
1 USA New-York New
2 Russia Sankt-Petersburg Spb
3 USA New-York NY
4 USA New-York New
source2.groupby(['Country','City'])['Short name'].agg(pd.Series.mode)
Country City
Russia Sankt-Petersburg Spb
USA New-York [NY, New]
Name: Short name, dtype: object
Or, if you want a separate row for each mode, you can use GroupBy.apply:
source2.groupby(['Country','City'])['Short name'].apply(pd.Series.mode)
Country City
Russia Sankt-Petersburg 0 Spb
USA New-York 0 NY
1 New
Name: Short name, dtype: object
If you don't care which mode is returned as long as it's one of them, then you will need a lambda that calls mode and extracts the first result:
source2.groupby(['Country', 'City'])['Short name'].agg(
    lambda x: pd.Series.mode(x)[0])
Country City
Russia Sankt-Petersburg Spb
USA New-York NY
Name: Short name, dtype: object
You can also use statistics.mode from Python's standard library, but...

import statistics

source.groupby(['Country', 'City'])['Short name'].apply(statistics.mode)
Country City
Russia Sankt-Petersburg Spb
USA New-York NY
Name: Short name, dtype: object
...it doesn't handle multiple modes well; a StatisticsError is raised. This is mentioned in the docs:
StatisticsError is raised if data is empty, or if there is not exactly one most common value.
(Note: since Python 3.8, statistics.mode no longer raises in this case and instead returns the first mode encountered.)
But you can see for yourself...
statistics.mode([1, 2])
# ---------------------------------------------------------------------------
# StatisticsError Traceback (most recent call last)
# ...
# StatisticsError: no unique mode; found 2 equally common values
eum*_*iro 16
Because of agg, the lambda function gets a Series, which does not have a 'Short name' attribute.
stats.mode returns a tuple of two arrays, so you have to take the first element of the first array in this tuple.
With these two simple changes:
source.groupby(['Country','City']).agg(lambda x: stats.mode(x)[0][0])
this returns
Short name
Country City
Russia Sankt-Petersburg Spb
USA New-York NY
abw*_*333 12
A bit late to the game here, but I was running into some performance problems with HYRY's solution, so I had to come up with another one.
It works by finding the frequency of each key-value pair, and then, for each key, only keeping the value that appears with it most often.
There's also an additional solution that supports multiple modes.
On a scale test that's representative of the data I'm working with, this reduced the runtime from 37.4 s to 0.5 s! Here's the code for the solution, some example usage, and the scale test:
import numpy as np
import pandas as pd
import random
import time

test_input = pd.DataFrame(columns=['key', 'value'],
                          data=[[1, 'A'],
                                [1, 'B'],
                                [1, 'B'],
                                [1, np.nan],
                                [2, np.nan],
                                [3, 'C'],
                                [3, 'C'],
                                [3, 'D'],
                                [3, 'D']])
def mode(df, key_cols, value_col, count_col):
    '''
    Pandas does not provide a `mode` aggregation function
    for its `GroupBy` objects. This function is meant to fill
    that gap, though the semantics are not exactly the same.

    The input is a DataFrame with the columns `key_cols`
    that you would like to group on, and the column
    `value_col` for which you would like to obtain the mode.

    The output is a DataFrame with a record per group that has at least one mode
    (null values are not counted). The `key_cols` are included as columns, `value_col`
    contains a mode (ties are broken arbitrarily and deterministically) for each
    group, and `count_col` indicates how many times each mode appeared in its group.
    '''
    return df.groupby(key_cols + [value_col]).size() \
             .to_frame(count_col).reset_index() \
             .sort_values(count_col, ascending=False) \
             .drop_duplicates(subset=key_cols)
def modes(df, key_cols, value_col, count_col):
    '''
    Pandas does not provide a `mode` aggregation function
    for its `GroupBy` objects. This function is meant to fill
    that gap, though the semantics are not exactly the same.

    The input is a DataFrame with the columns `key_cols`
    that you would like to group on, and the column
    `value_col` for which you would like to obtain the modes.

    The output is a DataFrame with a record per group that has at least
    one mode (null values are not counted). The `key_cols` are included as
    columns, `value_col` contains lists indicating the modes for each group,
    and `count_col` indicates how many times each mode appeared in its group.
    '''
    return df.groupby(key_cols + [value_col]).size() \
             .to_frame(count_col).reset_index() \
             .groupby(key_cols + [count_col])[value_col].unique() \
             .to_frame().reset_index() \
             .sort_values(count_col, ascending=False) \
             .drop_duplicates(subset=key_cols)
print(test_input)
print(mode(test_input, ['key'], 'value', 'count'))
print(modes(test_input, ['key'], 'value', 'count'))

scale_test_data = [[random.randint(1, 100000),
                    str(random.randint(123456789001, 123456789100))] for i in range(1000000)]
scale_test_input = pd.DataFrame(columns=['key', 'value'],
                                data=scale_test_data)

start = time.time()
mode(scale_test_input, ['key'], 'value', 'count')
print(time.time() - start)

start = time.time()
modes(scale_test_input, ['key'], 'value', 'count')
print(time.time() - start)

start = time.time()
scale_test_input.groupby(['key']).agg(lambda x: x.value_counts().index[0])
print(time.time() - start)
Running this code will print something like:
key value
0 1 A
1 1 B
2 1 B
3 1 NaN
4 2 NaN
5 3 C
6 3 C
7 3 D
8 3 D
key value count
1 1 B 2
2 3 C 2
key count value
1 1 2 [B]
2 3 2 [C, D]
0.489614009857
9.19386196136
37.4375009537
Hope this helps!
The two top answers here suggest:
df.groupby(cols).agg(lambda x:x.value_counts().index[0])
or, preferably,
df.groupby(cols).agg(pd.Series.mode)
However, both of these fail in simple edge cases, as demonstrated here:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'client_id': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'C'],
    'date': ['2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01'],
    'location': ['NY', 'NY', 'LA', 'LA', 'DC', 'DC', 'LA', np.nan]
})
The first:
df.groupby(['client_id', 'date']).agg(lambda x:x.value_counts().index[0])
yields an IndexError (because of the empty Series returned by group C). The second:
df.groupby(['client_id', 'date']).agg(pd.Series.mode)
returns ValueError: Function does not reduce, since the first group returns a list of two (because there are two modes). (As documented here, if the first group returned a single mode, this would work!)
Two possible solutions for this case are:
import scipy.stats

df.groupby(['client_id', 'date']).agg(lambda x: scipy.stats.mode(x)[0])
And the solution given to me by cs95 in the comments here:
def foo(x):
    m = pd.Series.mode(x)
    return m.values[0] if not m.empty else np.nan

df.groupby(['client_id', 'date']).agg(foo)
However, all of these are slow and unsuited to large datasets. The solution I ended up using, which a) can deal with these cases and b) is much, much faster, is a lightly modified version of abw33's answer (which should be higher):
def get_mode_per_column(dataframe, group_cols, col):
    return (dataframe.fillna(-1)  # NaN placeholder to keep the group
            .groupby(group_cols + [col])
            .size()
            .to_frame('count')
            .reset_index()
            .sort_values('count', ascending=False)
            .drop_duplicates(subset=group_cols)
            .drop(columns=['count'])
            .sort_values(group_cols)
            .replace(-1, np.nan))  # restore NaNs

group_cols = ['client_id', 'date']
non_grp_cols = list(set(df).difference(group_cols))
output_df = get_mode_per_column(df, group_cols, non_grp_cols[0]).set_index(group_cols)
for col in non_grp_cols[1:]:
    output_df[col] = get_mode_per_column(df, group_cols, col)[col].values
Essentially, the method works on one column at a time and outputs a DataFrame, so instead of an intensive concat you treat the first result as the DataFrame and then iteratively add each output array (values.flatten()) to it as a column.
A fast solution using DataFrame.value_counts
The top 3 answers here:
source.groupby(['Country', 'City'])['Short name'].agg(pd.Series.mode)
source.groupby(['Country', 'City']).agg(lambda x: x.value_counts().index[0])
source.groupby(['Country', 'City']).agg(lambda x: stats.mode(x)[0])
are extremely slow for large datasets.
A solution using collections.Counter is much faster (20-40 times faster than the top 3 methods):
source.groupby(['Country', 'City'])['Short name'].agg(lambda srs: Counter(list(srs)).most_common(1)[0][0])
but still slow.
The solutions by abw333 and Josh Friedlander are much faster (about 10 times faster than the method using Counter). Those solutions can be optimized further by using value_counts instead (DataFrame.value_counts is available since pandas 1.1.0):
source.value_counts(['Country', 'City', 'Short name']).pipe(lambda x: x[~x.droplevel('Short name').index.duplicated()]).reset_index(name='Count')
To make the function account for NaNs, as in Josh Friedlander's function, simply turn off the dropna parameter:
source.value_counts(['Country', 'City', 'Short name'], dropna=False).pipe(lambda x: x[~x.droplevel('Short name').index.duplicated()]).reset_index(name='Count')
Using abw333's setup, if we test the runtime difference, for a DataFrame with 1 million rows, value_counts is about 10% faster than abw333's solution:

scale_test_data = [[random.randint(1, 100),
                    str(random.randint(100, 900)),
                    str(random.randint(0, 2))] for i in range(1000000)]
source = pd.DataFrame(data=scale_test_data, columns=['Country', 'City', 'Short name'])
keys = ['Country', 'City']
vals = ['Short name']

%timeit source.value_counts(keys + vals).pipe(lambda x: x[~x.droplevel(vals).index.duplicated()]).reset_index(name='Count')
# 376 ms ± 3.42 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit mode(source, ['Country', 'City'], 'Short name', 'Count')
# 415 ms ± 1.08 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
For ease of use, I wrapped this solution in a function that you can easily copy-paste and use in your own environment. The function can also find group modes of multiple columns.
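The wrapped function itself did not survive the page scrape; below is a sketch of what such a wrapper might look like, built from the value_counts one-liner above. The name group_mode and its signature are my own, not from the original answer:

```python
import pandas as pd

def group_mode(df, key_cols, value_cols, dropna=True):
    """Return one row per key combination, holding the most frequent
    value(s) of `value_cols` and their count, via DataFrame.value_counts
    (pandas >= 1.1.0). Ties are broken by value_counts ordering."""
    return (df.value_counts(key_cols + value_cols, dropna=dropna)
              .pipe(lambda x: x[~x.droplevel(value_cols).index.duplicated()])
              .reset_index(name='Count'))

source = pd.DataFrame({'Country': ['USA', 'USA', 'Russia', 'USA'],
                       'City': ['New-York', 'New-York', 'Sankt-Petersburg', 'New-York'],
                       'Short name': ['NY', 'New', 'Spb', 'NY']})
print(group_mode(source, ['Country', 'City'], ['Short name']))
```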
Formally, the correct answer is @eumiro's solution. The problem with @HYRY's solution is that when you have a sequence of numbers like [1, 2, 3, 4] the solution is wrong, i.e., you don't have the mode. Example:
>>> import pandas as pd
>>> df = pd.DataFrame(
...     {
...         'client': ['A', 'B', 'A', 'B', 'B', 'C', 'A', 'D', 'D', 'E', 'E', 'E', 'E', 'E', 'A'],
...         'total': [1, 4, 3, 2, 4, 1, 2, 3, 5, 1, 2, 2, 2, 3, 4],
...         'bla': [10, 40, 30, 20, 40, 10, 20, 30, 50, 10, 20, 20, 20, 30, 40]
...     }
... )
If you compute it the way @HYRY does, you get:
>>> print(df.groupby(['client']).agg(lambda x: x.value_counts().index[0]))
total bla
client
A 4 30
B 4 40
C 1 10
D 3 30
E 2 20
And this is clearly wrong (see the A value, which should be 1 and not 4) because it can't handle unique values.
Thus, the other solution is correct:
>>> import scipy.stats
>>> print(df.groupby(['client']).agg(lambda x: scipy.stats.mode(x)[0][0]))
total bla
client
A 1 10
B 4 40
C 1 10
D 3 30
E 2 20
If you don't want to include NaN values, using Counter is much, much faster than pd.Series.mode or pd.Series.value_counts()[0]:
from collections import Counter

def get_most_common(srs):
    x = list(srs)
    my_counter = Counter(x)
    return my_counter.most_common(1)[0][0]

df.groupby(col).agg(get_most_common)
This should work. It will fail when you have NaN values, though, since each NaN is counted separately.
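If the NaN behavior matters, one option is to drop NaNs before counting. This is a hedged variation of my own on the function above, not part of the original answer; the name get_most_common_dropna is hypothetical:

```python
from collections import Counter

import numpy as np
import pandas as pd

def get_most_common_dropna(srs):
    # Drop NaNs first so each NaN is not counted as its own value.
    counter = Counter(srs.dropna())
    return counter.most_common(1)[0][0] if counter else np.nan

df = pd.DataFrame({'key': ['a', 'a', 'a', 'b'],
                   'val': ['x', 'x', np.nan, np.nan]})
print(df.groupby('key')['val'].agg(get_most_common_dropna))
# group 'a' -> 'x'; group 'b' has only NaNs, so it stays NaN
```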
Viewed: 64655 times