doo*_*oms 18 python nlp scikit-learn
I am trying to add stemming to an NLP pipeline with sklearn.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from nltk.corpus import stopwords
from nltk.stem.snowball import FrenchStemmer

stop = stopwords.words('french')
stemmer = FrenchStemmer()

class StemmedCountVectorizer(CountVectorizer):
    def __init__(self, stemmer):
        super(StemmedCountVectorizer, self).__init__()
        self.stemmer = stemmer

    def build_analyzer(self):
        analyzer = super(StemmedCountVectorizer, self).build_analyzer()
        return lambda doc: (self.stemmer.stem(w) for w in analyzer(doc))

stem_vectorizer = StemmedCountVectorizer(stemmer)
text_clf = Pipeline([('vect', stem_vectorizer),
                     ('tfidf', TfidfTransformer()),
                     ('clf', SVC(kernel='linear', C=1))])
This pipeline works fine when I use sklearn's plain CountVectorizer. It also works if I build the features manually, like this:
vectorizer = StemmedCountVectorizer(stemmer)
X_counts = vectorizer.fit_transform(X)
tfidf_transformer = TfidfTransformer()
X_tfidf = tfidf_transformer.fit_transform(X_counts)
EDIT:

If I try this pipeline in my IPython notebook, it displays [*] and nothing happens. When I look at my terminal, it gives this error:
Process PoolWorker-12:
Traceback (most recent call last):
File "C:\Anaconda2\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "C:\Anaconda2\lib\multiprocessing\process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "C:\Anaconda2\lib\multiprocessing\pool.py", line 102, in worker
task = get()
File "C:\Anaconda2\lib\site-packages\sklearn\externals\joblib\pool.py", line 360, in get
return recv()
AttributeError: 'module' object has no attribute 'StemmedCountVectorizer'
Example

Here is the complete example:
from sklearn.pipeline import Pipeline
from sklearn import grid_search
from sklearn.svm import SVC
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from nltk.stem.snowball import FrenchStemmer

stemmer = FrenchStemmer()
analyzer = CountVectorizer().build_analyzer()

def stemming(doc):
    return (stemmer.stem(w) for w in analyzer(doc))

X = ['le chat est beau', 'le ciel est nuageux', 'les gens sont gentils',
     'Paris est magique', 'Marseille est tragique', 'JCVD est fou']
Y = [1, 0, 1, 1, 0, 0]

text_clf = Pipeline([('vect', CountVectorizer()),
                     ('tfidf', TfidfTransformer()),
                     ('clf', SVC())])
parameters = {'vect__analyzer': ['word', stemming]}
gs_clf = grid_search.GridSearchCV(text_clf, parameters, n_jobs=-1)
gs_clf.fit(X, Y)
If you remove stemming from the parameters, it works; otherwise it does not.
UPDATE:

The problem appears to be in the parallelization step: when n_jobs=-1 is removed, the problem disappears.
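The traceback above is an unpickling failure: with n_jobs=-1, joblib must pickle the estimator (including the custom analyzer) and ship it to worker processes, and objects defined in a notebook's __main__, or lambdas like the one returned by build_analyzer, cannot be looked up by name on the worker side. A minimal standard-library sketch of the distinction (is_picklable is a hypothetical helper, not part of the question's code):

```python
import pickle

def is_picklable(obj):
    """Hypothetical helper: does pickling this object succeed?"""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

# A lambda (like the one returned by build_analyzer) is pickled by
# qualified name; the lookup fails because its name is just '<lambda>':
analyzer_lambda = lambda doc: [w.lower() for w in doc.split()]

# A plain function is pickled as a module/name reference, so joblib
# workers can re-import it when it lives in an importable module:
def stemmed_words(doc):
    return [w.lower() for w in doc.split()]

print(is_picklable(analyzer_lambda))   # False
print(is_picklable(stemmed_words))     # True when defined in an importable module
```

In practice the fix is to move the stemming function (or the StemmedCountVectorizer subclass) into a real module and import it, or to keep n_jobs=1 inside the notebook.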
joe*_*eln 26
You can pass a callable as analyzer to the CountVectorizer constructor to provide a custom analyzer. This appears to work for me.
from sklearn.feature_extraction.text import CountVectorizer
from nltk.stem.snowball import FrenchStemmer

stemmer = FrenchStemmer()
analyzer = CountVectorizer().build_analyzer()

def stemmed_words(doc):
    return (stemmer.stem(w) for w in analyzer(doc))

stem_vectorizer = CountVectorizer(analyzer=stemmed_words)
print(stem_vectorizer.fit_transform(['Tu marches dans la rue']))
print(stem_vectorizer.get_feature_names())
This prints:
(0, 4) 1
(0, 2) 1
(0, 0) 1
(0, 1) 1
(0, 3) 1
[u'dan', u'la', u'march', u'ru', u'tu']
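The same module-level stemmed_words function drops straight into the question's pipeline; a minimal sketch on the question's tiny corpus (the final prediction line is illustrative only, the label is not asserted):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.svm import SVC
from nltk.stem.snowball import FrenchStemmer

stemmer = FrenchStemmer()
analyzer = CountVectorizer().build_analyzer()

# Module-level function, so it also survives pickling under n_jobs=-1
def stemmed_words(doc):
    return [stemmer.stem(w) for w in analyzer(doc)]

X = ['le chat est beau', 'le ciel est nuageux', 'les gens sont gentils',
     'Paris est magique', 'Marseille est tragique', 'JCVD est fou']
Y = [1, 0, 1, 1, 0, 0]

clf = Pipeline([
    ('vect', CountVectorizer(analyzer=stemmed_words)),
    ('tfidf', TfidfTransformer()),
    ('clf', SVC(kernel='linear', C=1)),
])
clf.fit(X, Y)
print(clf.predict(['le chat est magique']))
```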
Par*_*pta 15
I know I am quite late posting an answer. But here it is, in case someone still needs help.

The cleanest way to add a language stemmer to a count vectorizer is by overriding build_analyzer():
from sklearn.feature_extraction.text import CountVectorizer
from nltk.corpus import stopwords
import nltk.stem

french_stemmer = nltk.stem.SnowballStemmer('french')

class StemmedCountVectorizer(CountVectorizer):
    def build_analyzer(self):
        analyzer = super(StemmedCountVectorizer, self).build_analyzer()
        return lambda doc: [french_stemmer.stem(w) for w in analyzer(doc)]

# CountVectorizer only ships a built-in 'english' stop-word list, so pass
# NLTK's French stop words explicitly rather than stop_words='french'
vectorizer_s = StemmedCountVectorizer(min_df=3, analyzer="word",
                                      stop_words=stopwords.words('french'))
You can then freely call the CountVectorizer class's fit and transform functions on the vectorizer_s object.
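For instance, a sketch with a throwaway two-document corpus (min_df is left at its default here so that the tiny corpus actually yields features; the exact stems come from NLTK's French Snowball stemmer):

```python
from sklearn.feature_extraction.text import CountVectorizer
import nltk.stem

french_stemmer = nltk.stem.SnowballStemmer('french')

class StemmedCountVectorizer(CountVectorizer):
    def build_analyzer(self):
        analyzer = super(StemmedCountVectorizer, self).build_analyzer()
        return lambda doc: [french_stemmer.stem(w) for w in analyzer(doc)]

vectorizer_s = StemmedCountVectorizer()
X = vectorizer_s.fit_transform(['les chats marchent', 'le chat marche'])

# Singular and plural forms now collapse onto shared stemmed features,
# e.g. 'chats'/'chat' both map to the stem 'chat'
print(X.shape)
print(sorted(vectorizer_s.vocabulary_))
```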
You can try:
def build_analyzer(self):
    analyzer = super(StemmedCountVectorizer, self).build_analyzer()
    return lambda doc: (stemmer.stem(w) for w in analyzer(doc))
and remove the __init__ method.
Viewed: 17276 times