Keras Tokenizer num_words doesn't seem to work

>>> t = Tokenizer(num_words=3)
>>> l = ["Hello, World! This is so&#$ fantastic!", "There is no other world like this one"]
>>> t.fit_on_texts(l)
>>> t.word_index
{'fantastic': 6, 'like': 10, 'no': 8, 'this': 2, 'is': 3, 'there': 7, 'one': 11, 'other': 9, 'so': 5, 'world': 1, 'hello': 4}

I expected t.word_index to contain only the top 3 words. What am I doing wrong?
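A note on the behavior being asked about: in Keras, fit_on_texts always builds the full word_index over every word it sees; num_words is only applied later, when texts_to_sequences (or texts_to_matrix) filters out words whose index is not among the top num_words - 1. The sketch below is a minimal pure-Python approximation of that semantics (not the actual Keras implementation — the real Tokenizer uses its own filters string for punctuation stripping):

```python
from collections import Counter

def fit_word_index(texts):
    # Count word frequencies across all texts (lowercased, with
    # non-alphanumeric characters stripped), then rank by frequency:
    # the most frequent word gets index 1, the next index 2, and so on.
    counts = Counter()
    for text in texts:
        for raw in text.lower().split():
            word = "".join(ch for ch in raw if ch.isalnum())
            if word:
                counts[word] += 1
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

def texts_to_sequences(texts, word_index, num_words=None):
    # num_words is applied HERE, not when the index is built:
    # only words whose index is strictly below num_words survive,
    # i.e. the num_words - 1 most frequent words.
    seqs = []
    for text in texts:
        seq = []
        for raw in text.lower().split():
            word = "".join(ch for ch in raw if ch.isalnum())
            i = word_index.get(word)
            if i is not None and (num_words is None or i < num_words):
                seq.append(i)
        seqs.append(seq)
    return seqs
```

On the question's two sentences this reproduces an 11-entry word index (world=1, this=2, is=3, ...), while texts_to_sequences(..., num_words=3) keeps only "world" and "this" — which is why word_index in the question shows all 11 words rather than 3.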

machine-learning tokenize neural-network deep-learning keras

11 votes · 3 answers · 2561 views