I have a list of bigrams, say: [('word','word'),('word','word'),...,('word','word')]. How can I use scikit-learn's HashingVectorizer to create a feature vector that can then be passed to a classification algorithm such as SVC or Naive Bayes, or any other kind of classifier?
First, you have to understand what the different vectorizers are doing. Most vectorizers are based on the bag-of-words approach, which maps document tokens onto a matrix.
From the sklearn documentation, CountVectorizer and HashingVectorizer:
Convert a collection of text documents to a matrix of token counts
For example, these sentences:
The Fulton County Grand Jury said Friday an investigation of Atlanta's recent primary election produced no evidence that any irregularities took place.
The jury further said in term-end presentments that the City Executive Committee, which had over-all charge of the election, "deserves the praise and thanks of the City of Atlanta" for the manner in which the election was conducted.
when processed with this crude vectorizer:
from collections import Counter
from itertools import chain
from string import punctuation
from nltk.corpus import brown, stopwords
# Let's say the training/testing data is a list of words and POS
sentences = brown.sents()[:2]
# Extract the content words as features, i.e. columns.
vocabulary = list(chain(*sentences))
stops = stopwords.words('english') + list(punctuation)
vocab_nostop = [i.lower() for i in vocabulary if i not in stops]
# Create a matrix from the sentences
matrix = [Counter([w for w in words if w in vocab_nostop]) for words in sentences]
print(matrix)
would become:
[Counter({u"''": 1, u'``': 1, u'said': 1, u'took': 1, u'primary': 1, u'evidence': 1, u'produced': 1, u'investigation': 1, u'place': 1, u'election': 1, u'irregularities': 1, u'recent': 1}), Counter({u'the': 6, u'election': 2, u'presentments': 1, u'``': 1, u'said': 1, u'jury': 1, u'conducted': 1, u"''": 1, u'deserves': 1, u'charge': 1, u'over-all': 1, u'praise': 1, u'manner': 1, u'term-end': 1, u'thanks': 1})]
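For comparison, sklearn's CountVectorizer does the same bag-of-words counting in a couple of lines. A minimal sketch using the same two Brown sentences as plain strings, where stop_words='english' plays the role of the manual stopword filtering above:

```python
from sklearn.feature_extraction.text import CountVectorizer

# The same two Brown sentences as plain strings.
sentences = [
    "The Fulton County Grand Jury said Friday an investigation of Atlanta's "
    "recent primary election produced no evidence that any irregularities "
    "took place.",
    "The jury further said in term-end presentments that the City Executive "
    "Committee, which had over-all charge of the election, deserves the "
    "praise and thanks of the City of Atlanta for the manner in which the "
    "election was conducted.",
]

# stop_words='english' replaces the manual stopword/punctuation filter.
vectorizer = CountVectorizer(stop_words='english')
matrix = vectorizer.fit_transform(sentences)  # sparse 2 x |vocab| count matrix
print(matrix.shape)
print(sorted(vectorizer.vocabulary_)[:5])
```

The result is a sparse matrix with one row per sentence and one column per vocabulary item, which is exactly what the classifiers below consume.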
So, given a very large dataset, doing this by hand can be rather inefficient, which is why the sklearn developers built more efficient code. One of the most important features of sklearn is that you don't even need to load the whole dataset into memory before vectorizing it.
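For instance, HashingVectorizer is stateless, so it can consume documents from a generator one at a time without ever holding the whole corpus in memory. A minimal sketch with two toy documents standing in for a huge file:

```python
from sklearn.feature_extraction.text import HashingVectorizer

def stream_docs():
    # Stand-in for e.g. reading a huge file line by line;
    # only one document is in memory at a time.
    for line in ["first tiny document", "second tiny document"]:
        yield line

# HashingVectorizer is stateless: no vocabulary is learned or stored,
# so transform() can consume a generator directly.
vectorizer = HashingVectorizer(n_features=2**10)
X = vectorizer.transform(stream_docs())
print(X.shape)  # (2, 1024)
```

Because the hashing trick fixes the number of columns up front (n_features), new documents never force the matrix to grow, which is what makes out-of-core learning possible.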
Since it's unclear what your task is, I'll assume you're looking for general usage. Let's say you're using it for language identification.
Say the input file for your training data is train.txt:
Pošto je EULEX obećao da će obaviti istragu o prošlosedmičnom izbijanju nasilja na sjeveru Kosova, taj incident predstavlja još jedan ispit kapaciteta misije da doprinese jačanju vladavine prava.
De todas as provações que teve de suplantar ao longo da vida, qual foi a mais difícil? O início. Qualquer começo apresenta dificuldades que parecem intransponíveis. Mas tive sempre a minha mãe do meu lado. Foi ela quem me ajudou a encontrar forças para enfrentar as situações mais decepcionantes, negativas, as que me punham mesmo furiosa.
Al parecer, Andrea Guasch pone que una relación a distancia es muy difícil de llevar como excusa. Algo con lo que, por lo visto, Alex Lequio no está nada de acuerdo. ¿O es que más bien ya ha conseguido la fama que andaba buscando?
Vo väčšine golfových rezortov ide o veľký komplex niekoľkých ihrísk blízko pri sebe spojených s hotelmi a ďalšími možnosťami trávenia voľného času – nie vždy sú manželky či deti nadšenými golfistami, a tak potrebujú iný druh vyžitia. Zaujímavé kombinácie ponúkajú aj rakúske, švajčiarske či talianske Alpy, kde sa dá v zime lyžovať a v lete hrať golf pod vysokými alpskými končiarmi.
And your corresponding labels are Bosnian, Portuguese, Spanish and Slovak, i.e.:
[bs,pt,es,sr]
Here's one way to do it with CountVectorizer and a Naive Bayes classifier. The example below is from the DSL shared task, as described at https://github.com/alvations/bayesline.
Let's start with the vectorizer. The vectorizer takes the input file, converts the training set into a vectorized matrix, and initializes the vectorizer (i.e. the features):
import codecs
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
trainfile = 'train.txt'
testfile = 'test.txt'
# Vectorizing data.
train = []
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','es','sr']
print(word_vectorizer.get_feature_names())
[OUT]:
[u'acuerdo', u'aj', u'ajudou', u'al', u'alex', u'algo', u'alpsk\xfdmi', u'alpy', u'andaba', u'andrea', u'ao', u'apresenta', u'as', u'bien', u'bl\xedzko', u'buscando', u'come\xe7o', u'como', u'con', u'conseguido', u'da', u'de', u'decepcionantes', u'deti', u'dificuldades', u'dif\xedcil', u'distancia', u'do', u'doprinese', u'druh', u'd\xe1', u'ela', u'encontrar', u'enfrentar', u'es', u'est\xe1', u'eulex', u'excusa', u'fama', u'foi', u'for\xe7as', u'furiosa', u'golf', u'golfistami', u'golfov\xfdch', u'guasch', u'ha', u'hotelmi', u'hra\u0165', u'ide', u'ihr\xedsk', u'incident', u'intranspon\xedveis', u'in\xedcio', u'in\xfd', u'ispit', u'istragu', u'izbijanju', u'ja\u010danju', u'je', u'jedan', u'jo\u0161', u'kapaciteta', u'kde', u'kombin\xe1cie', u'komplex', u'kon\u010diarmi', u'kosova', u'la', u'lado', u'lequio', u'lete', u'llevar', u'lo', u'longo', u'ly\u017eova\u0165', u'mais', u'man\u017eelky', u'mas', u'me', u'mesmo', u'meu', u'minha', u'misije', u'mo\u017enos\u0165ami', u'muy', u'm\xe1s', u'm\xe3e', u'na', u'nada', u'nad\u0161en\xfdmi', u'nasilja', u'negativas', u'nie', u'nieko\u013ek\xfdch', u'no', u'obaviti', u'obe\u0107ao', u'para', u'parecem', u'parecer', u'pod', u'pone', u'pon\xfakaj\xfa', u'por', u'potrebuj\xfa', u'po\u0161to', u'prava', u'predstavlja', u'pri', u'prova\xe7\xf5es', u'pro\u0161losedmi\u010dnom', u'punham', u'qual', u'qualquer', u'que', u'quem', u'rak\xfaske', u'relaci\xf3n', u'rezortov', u'sa', u'sebe', u'sempre', u'situa\xe7\xf5es', u'sjeveru', u'spojen\xfdch', u'suplantar', u's\xfa', u'taj', u'tak', u'talianske', u'teve', u'tive', u'todas', u'tr\xe1venia', u'una', u've\u013ek\xfd', u'vida', u'visto', u'vladavine', u'vo', u'vo\u013en\xe9ho', u'vysok\xfdmi', u'vy\u017eitia', u'v\xe4\u010d\u0161ine', u'v\u017edy', u'ya', u'zauj\xedmav\xe9', u'zime', u'\u0107e', u'\u010dasu', u'\u010di', u'\u010fal\u0161\xedmi', u'\u0161vaj\u010diarske']
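Each input line becomes one row of the vectorized matrix, with one column per feature name above. A small self-contained sketch (using two hypothetical snippets rather than the full training file) showing how rows and columns line up:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Two hypothetical snippets standing in for the training lines.
docs = ["que más bien ya", "que teve de suplantar"]
vec = CountVectorizer(analyzer='word')
X = vec.fit_transform(docs)

# Columns follow the feature names in index order, so X[i, j] is the
# count of feature j in document i.
names = sorted(vec.vocabulary_, key=vec.vocabulary_.get)
print(names)
print(X.toarray())
```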
Let's say your test documents are in test.txt, and the labels are Spanish es and Portuguese pt:
Por ello, ha insistido en que Europa tiene que darle un toque de atención porque Portugal esta incumpliendo la directiva del establecimiento del peaje
Estima-se que o mercado homossexual só na Cidade do México movimente cerca de oito mil milhões de dólares, aproximadamente seis mil milhões de euros
Now you can label the test documents with the trained classifier:
import codecs
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
trainfile = 'train.txt'
testfile = 'test.txt'
# Vectorizing data.
train = []
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','es','sr']
# Training NB
mnb = MultinomialNB()
mnb.fit(trainset, tags)
# Tagging the documents
testset = word_vectorizer.transform(codecs.open(testfile,'r','utf8'))
results = mnb.predict(testset)
print(results)
[OUT]:
['es' 'pt']
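Beyond the hard labels, MultinomialNB can also report how confident it is via predict_proba. A self-contained sketch with tiny hypothetical Spanish/Portuguese training documents (not the files above):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny hypothetical training documents, one per language.
train_docs = ["el gato come pescado", "o gato come peixe"]
tags = ['es', 'pt']

vec = CountVectorizer(analyzer='word')
X = vec.fit_transform(train_docs)
mnb = MultinomialNB().fit(X, tags)

test = vec.transform(["el perro come"])
print(mnb.predict(test))        # ['es']
print(mnb.predict_proba(test))  # per-class probabilities, ordered as mnb.classes_
```

Unseen words like "perro" simply fall outside the learned vocabulary, so the prediction here rests on "el" and "come" alone; that is a general property of count-based vectorizers.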
For more information on text classification, you might also find this NLTK-related question/answer useful; see "nltk NaiveBayesClassifier training for sentiment analysis".
To use the HashingVectorizer, note that it produces vectors with negative values, and the MultinomialNB classifier does not accept negative values, so you have to use a different classifier, like this:
import codecs
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import Perceptron
trainfile = 'train.txt'
testfile = 'test.txt'
# Vectorizing data.
train = []
word_vectorizer = HashingVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','es','sr']
# Training a Perceptron
pct = Perceptron(max_iter=100)  # n_iter was renamed to max_iter in newer sklearn
pct.fit(trainset, tags)
# Tagging the documents
testset = word_vectorizer.transform(codecs.open(testfile,'r','utf8'))
results = pct.predict(testset)
print(results)
[OUT]:
['es' 'es']
Note, however, that in this small example the Perceptron does worse. Different classifiers suit different tasks, different features suit different vectors, and different classifiers accept different vectors.
There is no perfect model, just better or worse ones.
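If you would rather keep MultinomialNB together with the hashing trick, newer scikit-learn versions (0.19+) let you turn off the sign alternation so that all hashed counts stay non-negative. A minimal sketch with one hypothetical line per language standing in for train.txt:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# One hypothetical line per language, standing in for train.txt.
docs = ["pošto je eulex obećao istragu",
        "de todas as provações que teve",
        "al parecer andrea pone excusa",
        "vo väčšine golfových rezortov"]
tags = ['bs', 'pt', 'es', 'sr']

# alternate_sign=False keeps every hashed count >= 0,
# which makes the matrix acceptable to MultinomialNB.
vec = HashingVectorizer(analyzer='word', alternate_sign=False)
X = vec.fit_transform(docs)
mnb = MultinomialNB().fit(X, tags)

print(mnb.predict(vec.transform(["de todas as provações que teve"])))
```

The trade-off is that without sign alternation, hash collisions accumulate rather than partially cancel, so a larger n_features may be needed on real data.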