How do I use the confusion matrix module in NLTK?

use*_*418 3 python nlp nltk

I'm following the NLTK book on using the confusion matrix, but the confusion matrix output looks strange.

# empirically examine where the tagger is making mistakes
# (t2 is the tagger trained earlier in Chapter 5 of the NLTK book)
test_tags = [tag for sent in brown.sents(categories='editorial')
    for (word, tag) in t2.tag(sent)]
gold_tags = [tag for (word, tag) in brown.tagged_words(categories='editorial')]
print nltk.ConfusionMatrix(gold_tags, test_tags)

Can anyone explain how to use the confusion matrix?

alv*_*vas 14

First, I assume you got the code from the old Chapter 05 of the NLTK book: https://nltk.googlecode.com/svn/trunk/doc/book/ch05.py, and in particular that you are looking at this section: http://pastebin.com/EC8fFqLU

Now, let's take a look at the ConfusionMatrix in NLTK. Try this:

from nltk.metrics import ConfusionMatrix

ref    = 'DET NN VB DET JJ NN NN IN DET NN'.split()    # reference (gold) tags
tagged = 'DET VB VB DET NN NN NN IN DET NN'.split()    # tagger output
cm = ConfusionMatrix(ref, tagged)
print cm

[OUT]:

    | D         |
    | E I J N V |
    | T N J N B |
----+-----------+
DET |<3>. . . . |
 IN | .<1>. . . |
 JJ | . .<.>1 . |
 NN | . . .<3>1 |
 VB | . . . .<1>|
----+-----------+
(row = reference; col = test)

The numbers embedded inside <> are the true positives (tp). From the example above, you can see that one of the JJ tags from the reference was wrongly tagged as NN in the tagger's output. For that instance, it counts as one false positive for NN and one false negative for JJ.
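Since the matrix is indexed by (reference, test) label pairs, you can also read individual cells directly. A minimal sketch reusing the `cm` object built above:

# row label = reference tag, column label = test (tagger) tag
print cm['JJ', 'NN']    # 1 -> one JJ in the reference was tagged as NN
print cm['NN', 'VB']    # 1 -> one NN in the reference was tagged as VB
print cm['DET', 'DET']  # 3 -> three DETs were tagged correctly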

To access the counts in the confusion matrix (e.g. for calculating precision/recall/fscore), you can collect the false negatives, false positives, and true positives like this:

from collections import Counter

labels = set('DET NN VB IN JJ'.split())

true_positives = Counter()
false_negatives = Counter()
false_positives = Counter()

# i = reference (row) label, j = test (column) label
for i in labels:
    for j in labels:
        if i == j:
            true_positives[i] += cm[i,j]
        else:
            false_negatives[i] += cm[i,j]
            false_positives[j] += cm[i,j]

print "TP:", sum(true_positives.values()), true_positives
print "FN:", sum(false_negatives.values()), false_negatives
print "FP:", sum(false_positives.values()), false_positives

[OUT]:

TP: 8 Counter({'DET': 3, 'NN': 3, 'VB': 1, 'IN': 1, 'JJ': 0})
FN: 2 Counter({'NN': 1, 'JJ': 1, 'VB': 0, 'DET': 0, 'IN': 0})
FP: 2 Counter({'VB': 1, 'NN': 1, 'DET': 0, 'JJ': 0, 'IN': 0})

To calculate the Fscore for each label:

for i in sorted(labels):
    if true_positives[i] == 0:
        fscore = 0
    else:
        precision = true_positives[i] / float(true_positives[i]+false_positives[i])
        recall = true_positives[i] / float(true_positives[i]+false_negatives[i])
        fscore = 2 * (precision * recall) / float(precision + recall)
    print i, fscore

[OUT]:

DET 1.0
IN 1.0
JJ 0
NN 0.75
VB 0.666666666667
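If you also want a single overall number alongside the per-label fscores, one simple option is token accuracy; a small sketch reusing `ref` and `tagged` from above, via the `accuracy()` function in `nltk.metrics`:

from nltk.metrics import accuracy

# fraction of positions where the tagger output matches the reference
print accuracy(ref, tagged)    # 0.8 (8 out of 10 tags are correct)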

I hope the above clears up how to use the confusion matrix in NLTK. Here is the complete code for the example above:

from collections import Counter
from nltk.metrics import ConfusionMatrix

ref  = 'DET NN VB DET JJ NN NN IN DET NN'.split()
tagged = 'DET VB VB DET NN NN NN IN DET NN'.split()
cm = ConfusionMatrix(ref, tagged)

print cm

labels = set('DET NN VB IN JJ'.split())

true_positives = Counter()
false_negatives = Counter()
false_positives = Counter()

for i in labels:
    for j in labels:
        if i == j:
            true_positives[i] += cm[i,j]
        else:
            false_negatives[i] += cm[i,j]
            false_positives[j] += cm[i,j]

print "TP:", sum(true_positives.values()), true_positives
print "FN:", sum(false_negatives.values()), false_negatives
print "FP:", sum(false_positives.values()), false_positives
print 

for i in sorted(labels):
    if true_positives[i] == 0:
        fscore = 0
    else:
        precision = true_positives[i] / float(true_positives[i]+false_positives[i])
        recall = true_positives[i] / float(true_positives[i]+false_negatives[i])
        fscore = 2 * (precision * recall) / float(precision + recall)
    print i, fscore