Why does my NLTK function slow down when processing a DataFrame?

Ani*_*jee 0 python optimization nltk

I am trying to run my function over a dataset with a million rows.

  1. I read the data from a CSV into a DataFrame
  2. I drop the data I don't need using a drop list
  3. I pass it through an NLTK function in a for loop

Code:

import string

from nltk.corpus import stopwords

def nlkt(val):
    val = repr(val)
    clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]
    nopunc = [char for char in str(clean_txt) if char not in string.punctuation]
    nonum = [char for char in nopunc if not char.isdigit()]
    words_string = ''.join(nonum)
    return words_string

Now I call the above function in a for loop to run over a million records. Even though I am on a heavyweight server with 24 CPU cores and 88 GB of RAM, I see the loop taking far too much time and not using the computing power that is available.

I am calling the above function like this:

import pandas as pd

data = pd.read_excel(scrPath + "UserData_Full.xlsx", encoding='utf-8')
droplist = ['Submitter', 'Environment']
data.drop(droplist,axis=1,inplace=True)

#Merging the columns company and detailed description

data['Anylize_Text']= data['Company'].astype(str) + ' ' + data['Detailed_Description'].astype(str)

finallist =[]

for eachlist in data['Anylize_Text']:
    z = nlkt(eachlist)
    finallist.append(z)

The code above works completely fine. This is just a sample set of records in Excel, but the actual data lives in a DB and will run into the hundreds of millions of records. Is there any way I can speed up the operation to pass the data through the function faster, making use of more of the computing power?

alv*_*vas 5

Your original nlkt() loops through each row 3 times.

def nlkt(val):
    val=repr(val)
    clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]
    nopunc = [char for char in str(clean_txt) if char not in string.punctuation]
    nonum = [char for char in nopunc if not char.isdigit()]
    words_string = ''.join(nonum)
    return words_string

Also, every time you call nlkt(), you re-initialize these again and again:

  • stopwords.words('english')
  • string.punctuation

These should be global:

stoplist = stopwords.words('english') + list(string.punctuation)
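To see why this matters, here is a small timing sketch of mine (not from the original answer). It uses a stand-in stopword list so it runs without the NLTK corpus downloaded, and it also makes stoplist a set, since set membership is a hash lookup rather than a linear scan:

```python
import string
import timeit

# Stand-in for stopwords.words('english') so this runs without NLTK;
# the real corpus has ~179 entries.
english_stopwords = ['i', 'me', 'my', 'we', 'our', 'the', 'a', 'an',
                     'and', 'or', 'is', 'are', 'was', 'not', 'no', 'this']

def rebuilt_each_call(word):
    # What the original loop does: rebuild the stopword list on every
    # call and scan it linearly.
    return word.lower() not in (english_stopwords + list(string.punctuation))

# Build once, globally, as a set: membership becomes an O(1) hash lookup.
stoplist = set(english_stopwords + list(string.punctuation))

def cached_global(word):
    return word.lower() not in stoplist

slow = timeit.timeit(lambda: rebuilt_each_call('zebra'), number=100_000)
fast = timeit.timeit(lambda: cached_global('zebra'), number=100_000)
print(f'rebuilt each call: {slow:.3f}s, cached set: {fast:.3f}s')
```

With the real NLTK stopword list the gap is much larger, since the corpus is re-read on every call.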

Going through things line by line:

val=repr(val)

I'm not sure why you need this, but you could easily cast the column to str type. This should be done outside of your preprocessing function.
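As an aside (my illustration, not part of the original answer): repr() on a string wraps it in quote characters, and the later str(clean_txt) stringifies the whole list, brackets and all, so both calls leak extra punctuation into the text:

```python
val = 'This is foo.'
# repr() adds quote characters around the string.
print(repr(val))
# str() on a list keeps the brackets, quotes and commas in the output.
clean_txt = val.split()
print(str(clean_txt))
```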

Hopefully this is self-explanatory:

>>> import pandas as pd
>>> df = pd.DataFrame([[0, 1, 2], [2, 'xyz', 4], [5, 'abc', 'def']])
>>> df
   0    1    2
0  0    1    2
1  2  xyz    4
2  5  abc  def
>>> df[1]
0      1
1    xyz
2    abc
Name: 1, dtype: object
>>> df[1].astype(str)
0      1
1    xyz
2    abc
Name: 1, dtype: object
>>> list(df[1])
[1, 'xyz', 'abc']
>>> list(df[1].astype(str))
['1', 'xyz', 'abc']

Now on to the next line:

clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]

Using str.split() is awkward; you should use a proper tokenizer. Otherwise, your punctuation might get stuck to the preceding words, e.g.:

>>> from nltk.corpus import stopwords
>>> from nltk import word_tokenize
>>> import string
>>> stoplist = stopwords.words('english') + list(string.punctuation)
>>> stoplist = set(stoplist)

>>> text = 'This is foo, bar and doh.'

>>> [word for word in text.split() if word.lower() not in stoplist]
['foo,', 'bar', 'doh.']

>>> [word for word in word_tokenize(text) if word.lower() not in stoplist]
['foo', 'bar', 'doh']

The .isdigit() check can also be folded into the same comprehension:

>>> text = 'This is foo, bar, 234, 567 and doh.'
>>> [word for word in word_tokenize(text) if word.lower() not in stoplist and not word.isdigit()]
['foo', 'bar', 'doh']

Putting it all together, your nlkt() should look like this:

def preprocess(text):
    return [word for word in word_tokenize(text) if word.lower() not in stoplist and not word.isdigit()]
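One difference to note: preprocess() returns a list of tokens, while the original nlkt() returned a string. If downstream code needs a single string, join the tokens back together (a small addition of mine; the token list here stands in for preprocess() output):

```python
# e.g. the output of preprocess('This is foo, bar and doh.')
tokens = ['foo', 'bar', 'doh']
words_string = ' '.join(tokens)
print(words_string)  # foo bar doh
```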

Then you can use DataFrame.apply:

data['Anylize_Text'].apply(preprocess)
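Putting the pieces together, here is a runnable end-to-end sketch of the approach, with toy data in place of the original Excel file. The regex fallback is only there so the sketch runs without NLTK or its corpora installed; in practice you would use the real word_tokenize and stopword list:

```python
import re
import string

import pandas as pd

try:
    from nltk import word_tokenize
    from nltk.corpus import stopwords
    word_tokenize('probe')  # raises LookupError if the punkt model is missing
    stoplist = set(stopwords.words('english') + list(string.punctuation))
except (ImportError, LookupError):
    # Fallback so the sketch runs without NLTK or its corpora: a crude
    # regex tokenizer and a tiny stopword list.
    word_tokenize = lambda text: re.findall(r"\w+|[^\w\s]", text)
    stoplist = set(['this', 'is', 'and', 'the', 'a'] + list(string.punctuation))

def preprocess(text):
    return [word for word in word_tokenize(text)
            if word.lower() not in stoplist and not word.isdigit()]

# Toy data in place of the original Excel file.
data = pd.DataFrame({'Anylize_Text': ['This is foo, bar and doh.',
                                      'Numbers 123 get dropped.']})
data['Tokens'] = data['Anylize_Text'].apply(preprocess)
print(data['Tokens'].tolist())
```

The stoplist is built once at module level, the tokenizer splits punctuation off cleanly, and apply() replaces the manual for loop over the column.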