Adr*_*ien 15 python scikit-learn
I'm trying to use scikit-learn's CountVectorizer to compute a simple word-frequency count.
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
texts=["dog cat fish","dog cat cat","fish bird","bird"]
cv = CountVectorizer()
cv_fit=cv.fit_transform(texts)
print cv.vocabulary_
{u'bird': 0, u'cat': 1, u'dog': 2, u'fish': 3}
I was expecting it to return {u'bird': 2, u'cat': 3, u'dog': 2, u'fish': 2}.
Ffi*_*ydd 33
cv.vocabulary_ in this example is a dict whose keys are the words (features) that were found and whose values are their indices, which is why they are 0, 1, 2, 3. It's just bad luck that they look similar to your counts :)
You need to work with the cv_fit object to get the counts:
from sklearn.feature_extraction.text import CountVectorizer
texts=["dog cat fish","dog cat cat","fish bird", 'bird']
cv = CountVectorizer()
cv_fit=cv.fit_transform(texts)
print(cv.get_feature_names())
print(cv_fit.toarray())
#['bird', 'cat', 'dog', 'fish']
#[[0 1 1 1]
# [0 2 1 0]
# [1 0 0 1]
# [1 0 0 0]]
Each row in the array corresponds to one of your original documents (strings), each column is a feature (word), and each element is the count of that word in that document. You can see that if you sum each column you get the counts you expected:
print(cv_fit.toarray().sum(axis=0))
#[2 3 2 2]
Honestly, I'd suggest using collections.Counter or something from NLTK, unless you have a specific reason to use scikit-learn, since it will be simpler; see the sketch below.
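For example, a minimal Counter sketch (assuming the same whitespace-separated texts as above, so no real tokenization is needed):
from collections import Counter

texts = ["dog cat fish", "dog cat cat", "fish bird", "bird"]
word_counts = Counter(word for text in texts for word in text.split())
print(word_counts)
# Counter({'cat': 3, 'dog': 2, 'fish': 2, 'bird': 2})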
小智 18
We can use zip to build a dict from the list of words and the list of their counts:
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
texts=["dog cat fish","dog cat cat","fish bird","bird"]
cv = CountVectorizer()
cv_fit=cv.fit_transform(texts)
word_list = cv.get_feature_names()
count_list = cv_fit.toarray().sum(axis=0)
print(word_list)
# ['bird', 'cat', 'dog', 'fish']
print(count_list)
# [2 3 2 2]
print(dict(zip(word_list, count_list)))
# {'fish': 2, 'dog': 2, 'bird': 2, 'cat': 3}
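Since pandas is already imported, a possible follow-up (just a sketch, reusing word_list and count_list from above) is to put the counts into a pandas Series so they can be sorted and inspected easily:
word_counts = pd.Series(count_list, index=word_list).sort_values(ascending=False)
print(word_counts)   # cat comes first with 3, the remaining words with 2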
cv_fit.toarray().sum(axis=0) certainly gives the correct result, but it is much faster to perform the sum on the sparse matrix and then convert the result to an array:
np.asarray(cv_fit.sum(axis=0))
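Note that cv_fit.sum(axis=0) gives a 1×N matrix rather than a flat array, so if you want the same word-to-count dict as above you still need to flatten it first; a small sketch combining this with the zip idea (use get_feature_names_out() in newer scikit-learn):
counts = np.asarray(cv_fit.sum(axis=0)).flatten()
print(dict(zip(cv.get_feature_names(), counts)))
# {'bird': 2, 'cat': 3, 'dog': 2, 'fish': 2}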
Combining everyone else's views and some of my own :) here is what I've got for you:
# NLTK data needed: nltk.download('punkt') and nltk.download('stopwords')
from collections import Counter
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
text='''Note that if you use RegexpTokenizer option, you lose
natural language features special to word_tokenize
like splitting apart contractions. You can naively
split on the regex \w+ without any need for the NLTK.
'''
# tokenize
raw = ' '.join(word_tokenize(text.lower()))
tokenizer = RegexpTokenizer(r'[A-Za-z]{2,}')
words = tokenizer.tokenize(raw)
# remove stopwords
stop_words = set(stopwords.words('english'))
words = [word for word in words if word not in stop_words]
# count word frequency, sort and return just 20
counter = Counter()
counter.update(words)
most_common = counter.most_common(20)
most_common
# Output (all 20 items)
[('note', 1),
 ('use', 1),
 ('regexptokenizer', 1),
 ('option', 1),
 ('lose', 1),
 ('natural', 1),
 ('language', 1),
 ('features', 1),
 ('special', 1),
 ('word', 1),
 ('tokenize', 1),
 ('like', 1),
 ('splitting', 1),
 ('apart', 1),
 ('contractions', 1),
 ('naively', 1),
 ('split', 1),
 ('regex', 1),
 ('without', 1),
 ('need', 1)]
You could do better than this in terms of efficiency, but if you're not too worried about that, this code is as good as any.
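For instance, a sketch of a slightly leaner variant that skips the join/word_tokenize round trip (reusing the text variable from above, and assuming you accept the RegexpTokenizer trade-offs it mentions):
from collections import Counter
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer

stop_words = set(stopwords.words('english'))
tokenizer = RegexpTokenizer(r'[A-Za-z]{2,}')
# tokenize, lowercase and drop stopwords in a single pass
words = (w for w in tokenizer.tokenize(text.lower()) if w not in stop_words)
most_common = Counter(words).most_common(20)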