Saving and reusing a TfidfVectorizer in scikit-learn

Jos*_*K J 12 python nlp pickle text-mining scikit-learn

I am using TfidfVectorizer in scikit-learn to create a matrix from text data. Now I need to save this object so I can reuse it later. I tried pickle, but it gives the following error.

loc=open('vectorizer.obj','w')
pickle.dump(self.vectorizer,loc)
*** TypeError: can't pickle instancemethod objects

I also tried joblib from sklearn.externals, which gives a similar error. Is there any way to save this object so that I can reuse it later?

Here is my full class:

class changeToMatrix(object):
    def __init__(self, ngram_range=(1,1), tokenizer=StemTokenizer()):
        from sklearn.feature_extraction.text import TfidfVectorizer
        self.vectorizer = TfidfVectorizer(ngram_range=ngram_range, analyzer='word', lowercase=True,
                                          token_pattern='[a-zA-Z0-9]+', strip_accents='unicode', tokenizer=tokenizer)

    def load_ref_text(self, text_file):
        textfile = open(text_file, 'r')
        lines = textfile.readlines()
        textfile.close()
        lines = ' '.join(lines)
        sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
        sentences = [sent_tokenizer.tokenize(lines.strip())]
        sentences1 = [item.strip().strip('.') for sublist in sentences for item in sublist]
        chk2 = pd.DataFrame(self.vectorizer.fit_transform(sentences1).toarray())  # vectorizer is fitted in this step
        return sentences1, [chk2]

    def get_processed_data(self, data_loc):
        ref_sentences, ref_dataframes = self.load_ref_text(data_loc)
        loc = open("indexedData/vectorizer.obj", "w")
        pickle.dump(self.vectorizer, loc)  # getting error here
        loc.close()
        return ref_sentences, ref_dataframes

alv*_*vas 7

First off, it's better to keep your imports at the top of your code rather than inside your class:

from sklearn.feature_extraction.text import TfidfVectorizer
class changeToMatrix(object):
  def __init__(self,ngram_range=(1,1),tokenizer=StemTokenizer()):
    ...

Next, StemTokenizer doesn't seem to be a canonical class. You probably got it from http://sahandsaba.com/visualizing-philosophers-and-scientists-by-the-words-they-used-with-d3js-and-python.html or somewhere else, so we'll assume that it returns a list of strings.

from nltk import word_tokenize
from nltk.corpus import wordnet as wn

class StemTokenizer(object):
    def __init__(self):
        self.ignore_set = {'footnote', 'nietzsche', 'plato', 'mr.'}

    def __call__(self, doc):
        words = []
        for word in word_tokenize(doc):
            word = word.lower()
            w = wn.morphy(word)
            if w and len(w) > 1 and w not in self.ignore_set:
                words.append(w)
        return words
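
For illustration, assuming NLTK's WordNet data is installed, it maps a sentence to a list of lemmas (the exact output below is indicative):

>>> tok = StemTokenizer()
>>> tok('Philosophers write many books.')
['philosopher', 'write', 'many', 'book']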

Now, to answer your actual question: you probably need to open the file in byte mode before dumping the pickle, i.e.:

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from nltk import word_tokenize
>>> import cPickle as pickle
>>> vectorizer = TfidfVectorizer(ngram_range=(0,2),analyzer='word',lowercase=True, token_pattern='[a-zA-Z0-9]+',strip_accents='unicode',tokenizer=word_tokenize)
>>> vectorizer
TfidfVectorizer(analyzer='word', binary=False, decode_error=u'strict',
        dtype=<type 'numpy.int64'>, encoding=u'utf-8', input=u'content',
        lowercase=True, max_df=1.0, max_features=None, min_df=1,
        ngram_range=(0, 2), norm=u'l2', preprocessor=None, smooth_idf=True,
        stop_words=None, strip_accents='unicode', sublinear_tf=False,
        token_pattern='[a-zA-Z0-9]+',
        tokenizer=<function word_tokenize at 0x7f5ea68e88c0>, use_idf=True,
        vocabulary=None)
>>> with open('vectorizer.pk', 'wb') as fin:
...     pickle.dump(vectorizer, fin)
... 
>>> exit()
alvas@ubi:~$ ls -lah vectorizer.pk 
-rw-rw-r-- 1 alvas alvas 763 Jun 15 14:18 vectorizer.pk

Note: using the with idiom for file I/O closes the file automatically once you leave the with scope.
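
To reuse the vectorizer in a later session, open the file in byte mode again and unpickle it. A minimal sketch, assuming the vectorizer.pk written above; since that vectorizer was pickled before fitting, you still call fit_transform after loading (one pickled after fitting keeps its vocabulary and can call transform directly):

>>> import cPickle as pickle
>>> with open('vectorizer.pk', 'rb') as fin:
...     vectorizer = pickle.load(fin)
... 
>>> X = vectorizer.fit_transform(['this is a test document'])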

Regarding the SnowballStemmer() issue, note that SnowballStemmer('english') is an object, while the stemming function is SnowballStemmer('english').stem.

Important:

  • TfidfVectorizer's tokenizer parameter expects a callable that takes a string and returns a list of strings
  • the Snowball stemmer, however, does not take a string and return a list of strings; it stems a single word and returns a single string.

So you need to do this:

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from nltk.stem import SnowballStemmer
>>> from nltk import word_tokenize
>>> import cPickle as pickle
>>> stemmer = SnowballStemmer('english').stem
>>> def stem_tokenize(text):
...     return [stemmer(i) for i in word_tokenize(text)]
... 
>>> vectorizer = TfidfVectorizer(ngram_range=(0,2),analyzer='word',lowercase=True, token_pattern='[a-zA-Z0-9]+',strip_accents='unicode',tokenizer=stem_tokenize)
>>> with open('vectorizer.pk', 'wb') as fin:
...     pickle.dump(vectorizer, fin)
...
>>> exit()
alvas@ubi:~$ ls -lah vectorizer.pk 
-rw-rw-r-- 1 alvas alvas 758 Jun 15 15:55 vectorizer.pk
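As a quick sanity check of the custom tokenizer (the exact stems are indicative and may vary with your NLTK version):

>>> stem_tokenize('the running dogs')
['the', 'run', 'dog']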


cot*_*ail 5

If you found this Q&A while looking into pickling a vectorizer to save disk space, you can either use joblib (which comes with scikit-learn) with compress=True, or use the built-in gzip module together with pickle. A working example is shown below. For my use case it shrank the file by at least a factor of two.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.datasets import fetch_20newsgroups
import joblib
import pickle
import gzip

data = fetch_20newsgroups().data
tvec = TfidfVectorizer()
tvec.fit(data)

# option #1
joblib.dump(tvec, 'vectorizer.pkl', compress=True)

# option #2
with gzip.open('vectorizer.pkl', 'wb') as f:
    pickle.dump(tvec, f)
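For completeness, a sketch of loading each variant back (same file names as above; use whichever matches how you saved it):

# option #1: joblib detects the compression transparently
tvec = joblib.load('vectorizer.pkl')

# option #2: decompress with gzip before unpickling
with gzip.open('vectorizer.pkl', 'rb') as f:
    tvec = pickle.load(f)

# the fitted vectorizer keeps its vocabulary, so it can score new text
X = tvec.transform(['a new document to score'])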