How to use word_tokenize on a DataFrame

ecl*_*irs 10 python nltk pandas

I recently started using the nltk module for text analysis, and I'm stuck on one point. I want to apply word_tokenize to a DataFrame so that I get all the words used in a particular row of the DataFrame.

data example:
       text
1.   This is a very good site. I will recommend it to others.
2.   Can you please give me a call at 9983938428. have issues with the listings.
3.   good work! keep it up
4.   not a very helpful site in finding home decor. 

expected output:

1.   'This','is','a','very','good','site','.','I','will','recommend','it','to','others','.'
2.   'Can','you','please','give','me','a','call','at','9983938428','.','have','issues','with','the','listings'
3.   'good','work','!','keep','it','up'
4.   'not','a','very','helpful','site','in','finding','home','decor'

Basically, I want to separate out all the words and find the length of each text in the DataFrame.

I know word_tokenize can be used on a string, but how do I apply it to the entire DataFrame?
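For a single string, the call works as expected (a minimal sketch, reusing the first row of the data above):

import nltk

tokens = nltk.word_tokenize('This is a very good site. I will recommend it to others.')
print(tokens)
# ['This', 'is', 'a', 'very', 'good', 'site', '.', 'I', 'will', 'recommend', 'it', 'to', 'others', '.']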

Please help!

Thanks in advance...

Gre*_*egg 20

You can use the apply method of the DataFrame API:

import pandas as pd
import nltk

df = pd.DataFrame({'sentences': [
    'This is a very good site. I will recommend it to others.',
    'Can you please give me a call at 9983938428. have issues with the listings.',
    'good work! keep it up',
]})

# axis=1 passes each row to the lambda, which tokenizes its 'sentences' field
df['tokenized_sents'] = df.apply(lambda row: nltk.word_tokenize(row['sentences']), axis=1)
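Note: if word_tokenize raises a LookupError, the Punkt tokenizer models it depends on have not been downloaded yet; a one-time download fixes this (newer NLTK releases may name the resource punkt_tab instead):

import nltk
nltk.download('punkt')  # tokenizer models required by word_tokenize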

Output:

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...   
1  Can you please give me a call at 9983938428. h...   
2                              good work! keep it up   

                                     tokenized_sents  
0  [This, is, a, very, good, site, ., I, will, re...  
1  [Can, you, please, give, me, a, call, at, 9983...  
2                      [good, work, !, keep, it, up]

To find the length of each text, try using apply with a lambda function again:

df['sents_length'] = df.apply(lambda row: len(row['tokenized_sents']), axis=1)

>>> df
                                           sentences  \
0  This is a very good site. I will recommend it ...   
1  Can you please give me a call at 9983938428. h...   
2                              good work! keep it up   

                                     tokenized_sents  sents_length  
0  [This, is, a, very, good, site, ., I, will, re...            14  
1  [Can, you, please, give, me, a, call, at, 9983...            15  
2                      [good, work, !, keep, it, up]             6  
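As a side note, since tokenized_sents already holds Python lists, the length column can also be computed with a Series-level apply, which avoids the row-wise lambda:

# len is applied to each list in the column directly
df['sents_length'] = df['tokenized_sents'].apply(len)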


Har*_*ath 18

pandas.Series.apply is faster than pandas.DataFrame.apply

import time

import pandas as pd
import nltk

df = pd.read_csv("/path/to/file.csv")

# Series.apply: the tokenizer is called directly on each value of one column
start = time.time()
df["unigrams"] = df["verbatim"].apply(nltk.word_tokenize)
print("series.apply", time.time() - start)

# DataFrame.apply: the lambda is invoked once per row, which adds overhead
start = time.time()
df["unigrams2"] = df.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1)
print("dataframe.apply", time.time() - start)

On a sample 125 MB csv file:

series.apply 144.428858995

dataframe.apply 201.884778976

Edit: You might think that because the DataFrame df already contains the tokenized column after series.apply(nltk.word_tokenize), this could skew the runtime of the next operation, dataframe.apply(nltk.word_tokenize).

Pandas optimizes under the hood for such cases. I got a similar runtime of about 200 s by running dataframe.apply(nltk.word_tokenize) on its own.
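To rule out such ordering effects entirely, one option is to time each variant on its own freshly loaded DataFrame (a sketch under that assumption; the file path and verbatim column are taken from the snippet above):

import time

import pandas as pd
import nltk

def timed(label, fn):
    # Run fn once and report the wall-clock time it took
    start = time.time()
    fn()
    print(label, time.time() - start)

# Fresh copies so neither run can benefit from the other's work
df1 = pd.read_csv("/path/to/file.csv")
df2 = pd.read_csv("/path/to/file.csv")

timed("series.apply", lambda: df1["verbatim"].apply(nltk.word_tokenize))
timed("dataframe.apply",
      lambda: df2.apply(lambda row: nltk.word_tokenize(row["verbatim"]), axis=1))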