Jam*_*hon 7 python random nltk
I'm having trouble using NLTK in Python, specifically the .generate() method.
generate(self, length=100)
Print random text, generated using a trigram language model.
Parameters:
* length (int) - The length of text to generate (default=100)
Here is a simplified version of what I am attempting.
import nltk
words = 'The quick brown fox jumps over the lazy dog'
tokens = nltk.word_tokenize(words)
text = nltk.Text(tokens)
print text.generate(3)
This will always generate:
Building ngram index...
The quick brown
None
rather than building a random phrase.
Here is my output for:
print text.generate()
Building ngram index...
The quick brown fox jumps over the lazy dog fox jumps over the lazy
dog dog The quick brown fox jumps over the lazy dog dog brown fox
jumps over the lazy dog over the lazy dog The quick brown fox jumps
over the lazy dog fox jumps over the lazy dog lazy dog The quick brown
fox jumps over the lazy dog the lazy dog The quick brown fox jumps
over the lazy dog jumps over the lazy dog over the lazy dog brown fox
jumps over the lazy dog quick brown fox jumps over the lazy dog The
None
It always starts with the same text, and only then varies it. I have also tried the first chapter of Orwell's 1984. Again, it always starts with the first 3 tokens (one of which is a space in that case) and then goes on randomly generating text.
What am I doing wrong here?
Lak*_*sad 12
To generate random text, you need to use a Markov chain.
Code to do it, taken from here:
import random

class Markov(object):

    def __init__(self, open_file):
        self.cache = {}
        self.open_file = open_file
        self.words = self.file_to_words()
        self.word_size = len(self.words)
        self.database()

    def file_to_words(self):
        self.open_file.seek(0)
        data = self.open_file.read()
        words = data.split()
        return words

    def triples(self):
        """ Generates triples from the given data string. So if our string were
        "What a lovely day", we'd generate (What, a, lovely) and then
        (a, lovely, day).
        """
        if len(self.words) < 3:
            return

        for i in range(len(self.words) - 2):
            yield (self.words[i], self.words[i+1], self.words[i+2])

    def database(self):
        for w1, w2, w3 in self.triples():
            key = (w1, w2)
            if key in self.cache:
                self.cache[key].append(w3)
            else:
                self.cache[key] = [w3]

    def generate_markov_text(self, size=25):
        seed = random.randint(0, self.word_size-3)
        seed_word, next_word = self.words[seed], self.words[seed+1]
        w1, w2 = seed_word, next_word
        gen_words = []
        for i in xrange(size):
            gen_words.append(w1)
            w1, w2 = w2, random.choice(self.cache[(w1, w2)])
        gen_words.append(w2)
        return ' '.join(gen_words)
You should "train" the Markov model with multiple sequences, so that you accurately sample the start-state probabilities (called "pi" in Markov-speak). If you use a single sequence, then you will always start in the same state.
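To illustrate the idea, here is a minimal Python 3 sketch (the class and method names are my own, not from the answer above): a bigram Markov chain trained on several sequences, where the start state is drawn from the empirical "pi" distribution of first words, so generation does not always begin the same way.

```python
import random
from collections import defaultdict

class MultiSeqMarkov:
    """Bigram Markov chain trained on multiple sequences."""

    def __init__(self):
        self.transitions = defaultdict(list)  # word -> observed next words
        self.starts = []                      # empirical pi: first word of each sequence

    def train(self, sequences):
        for seq in sequences:
            if not seq:
                continue
            self.starts.append(seq[0])
            for w1, w2 in zip(seq, seq[1:]):
                self.transitions[w1].append(w2)

    def generate(self, size=10):
        word = random.choice(self.starts)  # sample the start state from pi
        out = [word]
        for _ in range(size - 1):
            followers = self.transitions.get(word)
            if not followers:              # dead end: resample a start state
                word = random.choice(self.starts)
            else:
                word = random.choice(followers)
            out.append(word)
        return ' '.join(out)

sentences = [
    "the quick brown fox".split(),
    "a lazy dog sleeps".split(),
    "the dog jumps".split(),
]
m = MultiSeqMarkov()
m.train(sentences)
print(m.generate(5))
```

Because three sequences were used for training, repeated calls can start with either "the" or "a", instead of always the same prefix.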
In the case of Orwell's 1984, you would want to use sentence tokenization first (NLTK is very good at it), then word tokenization (yielding a list of lists of tokens, not just a single list of tokens), and then feed each sentence separately into the Markov model. This will allow it to correctly model sequence starts, instead of being stuck on a single way to start every sequence.
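The target shape is a list of token lists, one inner list per sentence. In practice you would use `nltk.sent_tokenize` and `nltk.word_tokenize`; the crude splitters below are stand-ins of my own so the sketch runs without NLTK corpora installed.

```python
# Real version: sents = [nltk.word_tokenize(s) for s in nltk.sent_tokenize(text)]
text = "It was a bright cold day in April. The clocks were striking thirteen."

def naive_sent_tokenize(text):
    # crude stand-in for nltk.sent_tokenize
    return [s.strip() for s in text.split('.') if s.strip()]

def naive_word_tokenize(sentence):
    # crude stand-in for nltk.word_tokenize
    return sentence.split()

# One inner list per sentence, so each sentence can be fed to the
# Markov model as a separate training sequence.
sentences = [naive_word_tokenize(s) for s in naive_sent_tokenize(text)]
print(sentences)
```

Each inner list then becomes one training sequence, so the model sees two distinct sentence starts ("It" and "The") rather than a single one.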