Ras*_*ngh 8 gensim text-classification word2vec doc2vec
Please help me understand the difference between TaggedDocument and LabeledSentence in gensim. My ultimate goal is text classification using a Doc2Vec model and some classifier. I am following this blog:
import os
import random
from nltk.corpus import stopwords
from gensim.models.doc2vec import Doc2Vec, LabeledSentence, TaggedDocument

class MyLabeledSentences(object):
    def __init__(self, dirname, dataDct={}, sentList=[]):
        self.dirname = dirname
        self.dataDct = {}
        self.sentList = []

    def ToArray(self):
        # One LabeledSentence per line, tagged "<filename>_<line_no>"
        for fname in os.listdir(self.dirname):
            with open(os.path.join(self.dirname, fname)) as fin:
                for item_no, sentence in enumerate(fin):
                    words = [w for w in sentence.lower().split() if w not in stopwords.words('english')]  # drop English stopwords
                    self.sentList.append(LabeledSentence(words, [fname.split('.')[0].strip() + '_%s' % item_no]))
        return self.sentList
class MyTaggedDocument(object):
    def __init__(self, dirname, dataDct={}, sentList=[]):
        self.dirname = dirname
        self.dataDct = {}
        self.sentList = []

    def ToArray(self):
        # Same as above, but emitting TaggedDocument objects
        for fname in os.listdir(self.dirname):
            with open(os.path.join(self.dirname, fname)) as fin:
                for item_no, sentence in enumerate(fin):
                    words = [w for w in sentence.lower().split() if w not in stopwords.words('english')]
                    self.sentList.append(TaggedDocument(words, [fname.split('.')[0].strip() + '_%s' % item_no]))
        return self.sentList
sentences = MyLabeledSentences(some_dir_name)
model_l = Doc2Vec(min_count=1, window=10, size=300, sample=1e-4, negative=5, workers=7)
sentences_l = sentences.ToArray()
model_l.build_vocab(sentences_l)
for epoch in range(15):
    random.shuffle(sentences_l)
    model_l.train(sentences_l)
    model_l.alpha -= 0.002  # decrease the learning rate
    model_l.min_alpha = model_l.alpha
sentences = MyTaggedDocument(some_dir_name)
model_t = Doc2Vec(min_count=1, window=10, size=300, sample=1e-4, negative=5, workers=7)
sentences_t = sentences.ToArray()
model_t.build_vocab(sentences_t)
for epoch in range(15):
    random.shuffle(sentences_t)
    model_t.train(sentences_t)
    model_t.alpha -= 0.002  # decrease the learning rate
    model_t.min_alpha = model_t.alpha
My question is: will model_l.docvecs['some_word'] be the same as model_t.docvecs['some_word']? Could you also point me to good web resources for understanding how TaggedDocument or LabeledSentence works?
LabeledSentence is an older, deprecated name for the same simple object type used to encapsulate a text example, which is now called TaggedDocument. Any object with words and tags properties, each a list, will do. (words is always a list of strings; tags can be a mix of integers and strings, but in the most common and efficient case it is just a list with a single integer id, starting at 0.)
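For context, here is a minimal sketch of current usage, assuming gensim 4.x (where the old size parameter is vector_size and model.docvecs is model.dv); the tiny corpus and names are made up for illustration:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

raw_texts = ["the quick brown fox", "jumped over the lazy dog"]
# One TaggedDocument per text: words is a list of string tokens,
# tags is a list holding a single integer id.
corpus = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(raw_texts)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=20)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

doc_vec = model.dv[0]                                     # vector for the document tagged 0
new_vec = model.infer_vector("a quick lazy fox".split())  # vector for unseen text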
model_l and model_t will be trained on the same data with the same parameters, using just different names for the objects, and to the same ends. But the vectors they return for individual word-tokens (model['some_word']) or document-tags (model.docvecs['somefilename_NN']) will likely differ: there is randomness in Word2Vec/Doc2Vec initialization and training sampling, and further jitter is introduced by ordering differences in multi-threaded training.
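To see that jitter concretely, here is an illustrative check, again assuming gensim 4.x names; the corpus is invented:

import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [TaggedDocument(words=("document %d repeats a few common words" % i).split(), tags=[i])
          for i in range(200)]

def train_once(seed):
    # Passing the corpus to the constructor builds the vocabulary and trains in one step.
    return Doc2Vec(corpus, vector_size=50, min_count=1, epochs=10, seed=seed, workers=7)

m1, m2 = train_once(1), train_once(1)
v1, v2 = m1.dv[0], m2.dv[0]
# Even with the same seed, multi-threaded scheduling jitter usually leaves the two
# runs' vectors close but not bit-identical.
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
# Per gensim's documentation, a fully reproducible run additionally needs
# workers=1 and a fixed PYTHONHASHSEED, on top of the seed parameter.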