Aar*_*yar 1 python csv stemming nltk
So, I'm new to Python and NLTK. I have a file called reviews.csv that contains reviews pulled from Amazon. I have tokenized the contents of this csv file and written them to a file called csvfile.csv. Here is the code:
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.stem import PorterStemmer
import csv #CommaSpaceVariable
from nltk.corpus import stopwords
ps = PorterStemmer()
stop_words = set(stopwords.words("english"))
with open('reviews.csv') as csvfile:
    readCSV = csv.reader(csvfile, delimiter='.')
    for lines in readCSV:
        word1 = word_tokenize(str(lines))
        print(word1)
        with open('csvfile.csv', 'a') as file:
            for word in word1:
                file.write(word)
                file.write('\n')

with open('csvfile.csv') as csvfile:
    readCSV1 = csv.reader(csvfile)
    for w in readCSV1:
        if w not in stopwords:
            print(w)
I am trying to perform stemming on csvfile.csv, but I get this error:
Traceback (most recent call last):
  File "/home/aarushi/test.py", line 25, in <module>
    if w not in stopwords:
TypeError: argument of type 'WordListCorpusReader' is not iterable
alv*_*vas 11
When you do
from nltk.corpus import stopwords
stopwords is a variable that points to NLTK's CorpusReader object.
The actual stopwords you are looking for (i.e. the list of stopwords) is only instantiated when you do:
stop_words = set(stopwords.words("english"))
So when checking whether a word from your list of tokens is a stopword, you should do:
from nltk.corpus import stopwords
stop_words = set(stopwords.words("english"))
for w in tokenized_sent:
    if w not in stop_words:
        pass  # Do something.
To avoid confusion, I usually name the actual list of stopwords stoplist:
from nltk.corpus import stopwords
stoplist = set(stopwords.words("english"))
for w in tokenized_sent:
    if w not in stoplist:
        pass  # Do something.
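Once the stoplist check works, the stemming you were originally after is just a matter of calling ps.stem() on the tokens that survive the filter. A minimal sketch, using a hypothetical example sentence in place of a line from reviews.csv:

from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords

ps = PorterStemmer()
stoplist = set(stopwords.words("english"))

# Hypothetical sentence standing in for one line of reviews.csv.
sentence = "The batteries arrived quickly and were working perfectly."

# Keep only non-stopword tokens and print their stems.
for w in word_tokenize(sentence):
    if w.lower() not in stoplist:
        print(ps.stem(w))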