How do I load sentences into Python gensim?

joh*_*ual 13 python nlp gensim

I'm trying to use the word2vec module from the gensim natural language processing library in Python.

The documentation says to initialize the model like this:

from gensim.models import Word2Vec
model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)

What format does gensim expect for the input sentences? I have raw text:

"the quick brown fox jumps over the lazy dogs"
"Then a cop quizzed Mick Jagger's ex-wives briefly."
etc.

What additional processing do I need to do on the raw text before feeding it to word2vec?


Update: Here is what I tried. When I load the sentences, I get nothing back.

>>> sentences = ['the quick brown fox jumps over the lazy dogs',
...              "Then a cop quizzed Mick Jagger's ex-wives briefly."]
>>> x = word2vec.Word2Vec()
>>> x.build_vocab([s.encode('utf-8').split() for s in sentences])
>>> x.vocab
{}

aIK*_*Kid 11

Word2Vec takes a list of utf-8 tokenized sentences. You can also stream the data from disk.

Make sure the text is utf-8, and split it into tokens:

sentences = ["the quick brown fox jumps over the lazy dogs",
             "Then a cop quizzed Mick Jagger's ex-wives briefly."]
word2vec.Word2Vec([s.encode('utf-8').split() for s in sentences],
                  size=100, window=5, min_count=5, workers=4)
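Concretely, the input format is just an iterable of token lists. A minimal plain-Python sketch of that tokenization step (the lowercasing here is an optional extra, not something gensim requires):

```python
sentences = [
    "the quick brown fox jumps over the lazy dogs",
    "Then a cop quizzed Mick Jagger's ex-wives briefly.",
]

# Word2Vec wants each sentence as a list of token strings,
# i.e. an iterable of lists of words -- not raw strings.
tokenized = [s.lower().split() for s in sentences]

print(tokenized[0])
# ['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dogs']
```

Passing raw strings instead would make gensim iterate character by character, which is another common way to end up with a nonsense vocabulary.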

  • Enable logging and watch what it prints. Your answer is in there. Spoiler: `min_count=5`. (7 upvotes)
  • @alKid Good answer, but it's a sequence (iterable) of sentences = not necessarily a list. That makes a big difference when `sentences` is larger than RAM, i.e. streamed from disk. (2 upvotes)
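The spoiler above can be checked without gensim: count word frequencies yourself and apply the default `min_count=5` cutoff. On a two-sentence corpus nothing survives, which is exactly why the asker's vocabulary came back empty:

```python
from collections import Counter

sentences = [
    "the quick brown fox jumps over the lazy dogs",
    "Then a cop quizzed Mick Jagger's ex-wives briefly.",
]

counts = Counter(w for s in sentences for w in s.lower().split())
# gensim's default min_count is 5: any word seen fewer than 5 times is dropped.
surviving = [w for w, c in counts.items() if c >= 5]

print(surviving)  # [] -- every word here occurs at most twice
```

Pass `min_count=1` to keep every word when experimenting with toy corpora.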

ngu*_*b05 5

As alKid pointed out, make sure the text is utf-8.

Two other things you may need to worry about:

  1. The input is too large to hold in memory, and you're loading it from a file.
  2. Removing stop words from the sentences.

Instead of loading a big list into memory, you can do something like this:

import nltk, gensim

class FileToSent(object):
    """Stream sentences from disk, one tokenized line at a time."""
    def __init__(self, filename):
        self.filename = filename
        self.stop = set(nltk.corpus.stopwords.words('english'))

    def __iter__(self):
        for line in open(self.filename, 'r'):
            # Decode, lowercase, tokenize, and drop stop words (Python 2 `unicode`).
            ll = [i for i in unicode(line, 'utf-8').lower().split() if i not in self.stop]
            yield ll

And then:

sentences = FileToSent('sentence_file.txt')
model = gensim.models.Word2Vec(sentences=sentences, window=5, min_count=5, workers=4, hs=1)
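For reference, here is a Python 3 sketch of the same streaming pattern, with a tiny hard-coded stop list standing in for `nltk.corpus.stopwords` so it runs without NLTK installed. The class-with-`__iter__` shape matters: gensim iterates over the corpus more than once (vocabulary build, then training), so a one-shot generator would not work.

```python
import os
import tempfile

class FileToSent:
    """Stream one tokenized, stopword-filtered sentence per line from disk."""
    # A stand-in stop list for illustration; the answer above uses NLTK's instead.
    STOP = {'the', 'a', 'over'}

    def __init__(self, filename):
        self.filename = filename

    def __iter__(self):
        # Re-opening the file here lets gensim iterate the corpus repeatedly.
        with open(self.filename, encoding='utf-8') as f:
            for line in f:
                yield [w for w in line.lower().split() if w not in self.STOP]

# Write a two-line corpus to a temp file and stream it back.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False,
                                 encoding='utf-8') as f:
    f.write("the quick brown fox jumps over the lazy dogs\n")
    f.write("Then a cop quizzed Mick Jagger's ex-wives briefly.\n")
    path = f.name

sents = list(FileToSent(path))
os.unlink(path)
print(sents[0])
# ['quick', 'brown', 'fox', 'jumps', 'lazy', 'dogs']
```

Because `FileToSent(path)` can be iterated any number of times, it can be handed straight to `gensim.models.Word2Vec(sentences=...)` in place of an in-memory list.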