How can I speed up multiple str.contains searches over several million rows?

SCo*_*ool 5 python regex pandas

I have a dataframe of store names that I'm trying to standardize. Here is a small sample to test with:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'store': ['McDonalds', 'Lidls', 'Lidl New York 123', 'KFC', 'Lidi Berlin',
              'Wallmart LA 90210', 'Aldi', 'London Lidl', 'Aldi627',
              'mcdonaldsabc123', 'Mcdonald_s', 'McDonalds12345', 'McDonalds5555',
              'McDonalds888', 'Aldi123', 'KFC-786', 'KFC-908', 'McDonalds511',
              'GerALDInes Shop'],
    'standard': np.nan,
})

                store  standard
0           McDonalds       NaN
1               Lidls       NaN
2   Lidl New York 123       NaN
3                 KFC       NaN
4         Lidi Berlin       NaN
5   Wallmart LA 90210       NaN
6                Aldi       NaN
7         London Lidl       NaN
8             Aldi627       NaN
9     mcdonaldsabc123       NaN
10         Mcdonald_s       NaN
11     McDonalds12345       NaN
12      McDonalds5555       NaN
13       McDonalds888       NaN
14            Aldi123       NaN
15            KFC-786       NaN
16            KFC-908       NaN
17       McDonalds511       NaN
18    GerALDInes Shop       NaN

I set up a dictionary of regexes to search each string and insert the standardized version of the store name into the standard column. This works fine on this small dataframe:

# set up the dictionary
regex_dict = {
 "McDonalds": r'(mcdonalds|mcdonald_s)',
 "Lidl" : r'(lidl|lidi)',
 "Wallmart":r'wallmart',
 "KFC": r'KFC',
 "Aldi":r'(\baldi\b|\baldi\d+)'
}

import re

# loop through the dictionary, using str.contains
for regname, regex_formula in regex_dict.items():
    df.loc[df['store'].str.contains(regex_formula, na=False, flags=re.I), 'standard'] = regname

print(df)

                store   standard
0           McDonalds  McDonalds
1               Lidls       Lidl
2   Lidl New York 123       Lidl
3                 KFC        KFC
4         Lidi Berlin       Lidl
5   Wallmart LA 90210   Wallmart
6                Aldi       Aldi
7         London Lidl       Lidl
8             Aldi627       Aldi
9     mcdonaldsabc123  McDonalds
10         Mcdonald_s  McDonalds
11     McDonalds12345  McDonalds
12      McDonalds5555  McDonalds
13       McDonalds888  McDonalds
14            Aldi123       Aldi
15            KFC-786        KFC
16            KFC-908        KFC
17       McDonalds511  McDonalds
18    GerALDInes Shop        NaN

The problem is that I have around six million rows to standardize, and the regex dictionary is much larger than the one shown here (many different store names, with various misspellings, and so on).

What I'd like to do is, on each pass of the loop, only run str.contains against the rows that have not yet been standardized and skip the rows that already have. The idea is to shrink the search space on every iteration and so cut the overall processing time.

I've tested filtering on the standard column so that str.contains only runs on rows where standard is NaN, but it doesn't give any real speedup. Working out which rows are still NaN takes time itself, before str.contains is even applied.

Here is my attempt at reducing the processing time of each loop:

for regname, regex_formula in regex_dict.items(): 

    # only apply str.contains to rows where standard is NaN
    df.loc[df['standard'].isnull() & df['store'].str.contains(regex_formula,na=False,flags=re.I), 'standard'] = regname

This works, but running it over all 6 million rows makes no real difference to the speed.
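
One likely reason: in the filtered version above, str.contains is still evaluated over the entire store column, because both sides of the & are computed in full before they are combined. Below is a minimal sketch of a variant that only runs the regex on the not-yet-standardized rows; the extra indexing has its own cost, so it is not guaranteed to win on 6 million rows.

import re

for regname, regex_formula in regex_dict.items():

    # rows that still need standardizing
    todo = df['standard'].isnull()

    # run the regex only on that subset of the store column
    hits = df.loc[todo, 'store'].str.contains(regex_formula, na=False, flags=re.I)

    # write the standard name back for the subset rows that matched
    df.loc[hits[hits].index, 'standard'] = regname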

Is it even possible to speed this up on a dataframe of 6 million rows?

SCo*_*ool 2

Using the approach below I managed to cut the required time by 40%. It's the best I could do.

I create an empty dataframe, call it fixed_df, append the newly standardized rows to it, and then at the end of each loop drop those same rows from the original dataframe. As each store gets standardized, the search space shrinks on every loop and fixed_df grows on every loop. At the end, fixed_df should hold all of the original rows, now standardized, and the original df should be empty.

# create empty df to store new results
fixed_df = pd.DataFrame()

# loop through dictionary
for regname, regex_formula in regex_dict.items(): 

    # search for regex formula, add standardized name into standard column
    df.loc[df['store'].str.contains(regex_formula, na=False, flags=re.I), 'standard'] = regname

    # get index of where names were fixed
    ind = df[df['standard']==regname].index

    # append fixed data to new df
    fixed_df = fixed_df.append(df[df.index.isin(ind)].copy())

    # remove processed stuff from original df
    df = df[~df.index.isin(ind)].copy()
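
A note on the loop above: DataFrame.append returns a new frame and copies the accumulated data on every call (and it was removed entirely in pandas 2.0). Here is a sketch of the same idea that collects the matched chunks in a plain Python list and concatenates once at the end; the logic is unchanged, only the bookkeeping differs.

import re

import pandas as pd

# collect the standardized pieces here instead of appending to a DataFrame
fixed_chunks = []

for regname, regex_formula in regex_dict.items():

    # flag the rows matching this pattern and standardize them
    mask = df['store'].str.contains(regex_formula, na=False, flags=re.I)
    df.loc[mask, 'standard'] = regname

    # stash the newly standardized rows and drop them from the working frame
    fixed_chunks.append(df[mask].copy())
    df = df[~mask].copy()

# one concat at the end instead of repeated appends
fixed_df = pd.concat(fixed_chunks)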
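
Separately from the answer above, since the real dictionary is much larger than the sample, another direction worth timing is to merge all the patterns into one alternation of named groups and scan the column in a single pass with Series.str.extract, then use the name of the matching group as the standard value. This is only a sketch and assumes the dictionary keys are valid Python identifiers (as they are here); whether it beats the per-pattern loop depends on the size and shape of the combined pattern.

import re

# one alternation of named groups, built from the dictionary
combined = '|'.join(f'(?P<{name}>{pattern})' for name, pattern in regex_dict.items())

# a single pass over the column: one result column per capture group
matches = df['store'].str.extract(combined, flags=re.I)

# keep only the named-group columns (unnamed groups inside the individual
# patterns show up as extra integer-labelled columns)
matches = matches[list(regex_dict.keys())]

# rows with at least one match get the name of the matching group
matched = matches.notna().any(axis=1)
df.loc[matched, 'standard'] = matches.loc[matched].notna().idxmax(axis=1)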