Tags: nlp, classification, nltk, python-2.7, text-classification
The following code runs a Naive Bayes movie review classifier. It also generates a list of the most informative features.

Note: the **movie review** corpus folder is included in nltk.
import string
from itertools import chain
import nltk
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
stop = stopwords.words('english')
documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = word_features.keys()[:100]
numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]
classifier = NaiveBayesClassifier.train(train_set)
print nltk.classify.accuracy(classifier, test_set)
classifier.show_most_informative_features(5)
How can I test the classifier on a specific file?

Please let me know if my question is ambiguous.
First, read these answers carefully; they contain part of the answer you need and also briefly explain what a classifier does and how it works in NLTK:

Testing a classifier on annotated data

Now to answer your question. Let's assume your question is a follow-up to this question: Using my own corpus instead of movie_reviews corpus for Classification in NLTK

If your test text is structured the same way as the movie_review corpus, then you can simply read the test data as you would the training data:

In case the code is unclear, here is a walkthrough:
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
The two lines above read the my_movie_reviews directory, which has this structure:
\my_movie_reviews
    \pos
        123.txt
        234.txt
    \neg
        456.txt
        789.txt
    README
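To sanity-check that the reader picked up your files and categories, you can probe it like this (a small sketch; the printed file names below are illustrative and depend on your own directory):

print(mr.categories())   # ['neg', 'pos']
print(mr.fileids()[:2])  # e.g. ['neg/456.txt', 'neg/789.txt']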
The next line then extracts the documents, where the pos/neg tag is taken from the directory structure:
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
Here is an explanation of the line above:
# This extracts the pos/neg tag
labels = [i.split('/')[0] for i in mr.fileids()]
# Reads the words from the corpus through the CategorizedPlaintextCorpusReader object
words = [w for w in mr.words(i)]
# Removes the stopwords
words = [w for w in mr.words(i) if w.lower() not in stop]
# Removes the punctuation
words = [w for w in mr.words(i) if w not in string.punctuation]
# Removes the stopwords and punctuation
words = [w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation]
# Removes the stopwords and punctuation, and puts the words in a tuple with the pos/neg label
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
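To make the filtering concrete, here is a toy run on a made-up token list:

import string
from nltk.corpus import stopwords
stop = stopwords.words('english')
tokens = ['This', 'movie', 'is', 'great', '!']
# 'This' and 'is' are stopwords and '!' is punctuation, so only content words survive:
print([w for w in tokens if w.lower() not in stop and w not in string.punctuation])
# prints: ['movie', 'great']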
The SAME process should be applied when you read the test data!

Now on to the feature processing:

The following lines pick the top 100 features for the classifier:
# Extract the words features and put them into FreqDist
# object which records the no. of times each unique word occurs
word_features = FreqDist(chain(*[i for i,j in documents]))
# Cuts the FreqDist to the top 100 words in terms of their counts.
word_features = word_features.keys()[:100]
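One caveat on word_features.keys()[:100]: on Python 2 with older NLTK, FreqDist.keys() returned words sorted by descending count, so the slice picked the 100 most frequent words. In NLTK 3 (and on Python 3, where dict views cannot be sliced), the dependable equivalent is most_common():

# Version-safe way to take the 100 most frequent words:
word_features = [word for word, count in FreqDist(chain(*[i for i, j in documents])).most_common(100)]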
Next, process the documents into a classifiable format:
# Splits the training data into training size and testing size
numtrain = int(len(documents) * 90 / 100)
# Process the documents for training data
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
# Process the documents for testing data
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]
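Each entry in train_set and test_set is a (feature_dict, label) tuple, where the feature dict maps each of the 100 candidate words to True or False depending on whether it occurs in that document. A made-up illustration of one entry (the words and values are invented):

# ({u'movie': True, u'film': False, ..., u'plot': True}, 'pos')
print(train_set[0][1])       # label of the first training document, e.g. 'pos'
print(len(train_set[0][0]))  # 100 -- one boolean feature per candidate word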
Now to explain the long list comprehensions that produce train_set and test_set:
# Take the first `numtrain` no. of documents
# as training documents
train_docs = documents[:numtrain]
# Takes the rest of the documents as test documents.
test_docs = documents[numtrain:]
# These extract the feature sets for the classifier
# please look at the full explanation on https://stackoverflow.com/questions/20827741/nltk-naivebayesclassifier-training-for-sentiment-analysis/
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in train_docs]
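If the nested comprehension is hard to parse, this expanded version does exactly the same thing with explicit loops:

train_set = []
for tokens, tag in train_docs:
    features = {}
    for i in word_features:
        # True if this candidate word occurs in the document, else False
        features[i] = (i in tokens)
    train_set.append((features, tag))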
You need to apply the same feature extraction to the test documents as above!

So here is how you read the test data:
stop = stopwords.words('english')
# Reads the training data.
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts training data into tuples of [(words,label), ...]
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
# Now do the same for the testing data.
testdir = '/home/alvas/test_reviews'
mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts testing data into tuples of [(words,label), ...]
test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]
Then continue with the processing steps described above, and simply do this to get the label for each test document, as @yvespeirsman described:
#### FOR TRAINING DATA ####
stop = stopwords.words('english')
# Reads the training data.
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts training data into tuples of [(words,label), ...]
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
# Extract training features.
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = word_features.keys()[:100]
# Assuming that you're using full data set
# since your test set is different.
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
#### TRAINS THE TAGGER ####
# Train the tagger
classifier = NaiveBayesClassifier.train(train_set)
#### FOR TESTING DATA ####
# Now do the same reading and processing for the testing data.
testdir = '/home/alvas/test_reviews'
mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts testing data into tuples of [(words,label), ...]
test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]
# Reads test data into features:
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in test_documents]
#### Evaluate the classifier ####
for doc, gold_label in test_set:
    tagged_label = classifier.classify(doc)
    if tagged_label == gold_label:
        print("Woohoo, correct")
    else:
        print("Boohoo, wrong")
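Instead of printing per-document hits and misses, you can also reuse nltk.classify.accuracy from the original snippet to get a single score:

import nltk
# Fraction of test documents whose predicted label matches the gold label:
print(nltk.classify.accuracy(classifier, test_set))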
If the code and explanation above make no sense to you, then you MUST read this tutorial before proceeding: http://www.nltk.org/howto/classify.html

Now let's say you have no annotations in your test data, i.e. your test.txt is not in a directory structure like movie_review but is just a plain text file:
\test_movie_reviews
    \1.txt
    \2.txt
Then there is no point reading it into a categorized corpus; you can simply read and tag the documents, i.e.:
import os
for infile in os.listdir('test_movie_reviews'):
    for line in open(os.path.join('test_movie_reviews', infile), 'r'):
        doc = word_tokenize(line.lower())
        featurized_doc = {i:(i in doc) for i in word_features}
        tagged_label = classifier.classify(featurized_doc)
But without annotations you cannot evaluate the results, so there is no gold tag to check against with an if-else; and if you are not using the CategorizedPlaintextCorpusReader, you also need to tokenize the text yourself.

If you just want to tag a plain-text file test.txt:
import string
from itertools import chain
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
from nltk import word_tokenize
stop = stopwords.words('english')
# Extracts the documents.
documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]
# Extract the features.
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = word_features.keys()[:100]
# Converts documents to features.
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
# Train the classifier.
classifier = NaiveBayesClassifier.train(train_set)
# Tag the test file.
with open('test.txt', 'r') as fin:
    for test_sentence in fin:
        # Tokenize the line.
        doc = word_tokenize(test_sentence.lower())
        featurized_doc = {i:(i in doc) for i in word_features}
        tagged_label = classifier.classify(featurized_doc)
        print(tagged_label)
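If you also want a confidence score alongside each tag, NaiveBayesClassifier provides prob_classify(), which returns a probability distribution over the labels; reusing featurized_doc from the loop above:

prob_dist = classifier.prob_classify(featurized_doc)
print(prob_dist.max())        # most likely label, same as classify()
print(prob_dist.prob('pos'))  # probability mass assigned to 'pos'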
Once again, please don't just copy and paste the solution; try to understand why and how it works.