Keras RNN (R) word-level model for text generation

Asked by Set*_*hel · Score: 5 · Tags: r, lstm, keras, recurrent-neural-network

I have been working through the character-level text-generation example: https://keras.rstudio.com/articles/examples/lstm_text_generation.html

I can't extend this example to a word-level model. Please see the reprex below.

library(keras)
library(readr)
library(stringr)
library(purrr)
library(tokenizers)

# Parameters

maxlen <- 40

# Data Preparation

# Retrieve text
path <- get_file(
  'nietzsche.txt', 
  origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt'
  )

# Load, collapse, and tokenize text
text <- read_lines(path) %>%
  str_to_lower() %>%
  str_c(collapse = "\n") %>%
  tokenize_words(simplify = TRUE)

print(sprintf("corpus length: %d", length(text)))

words <- text %>%
  unique() %>%
  sort()

print(sprintf("total words: %d", length(words)))  

This gives:

[1] "corpus length: 101345"
[1] "total words: 10283"

I run into problems when I move on to the next step:

# Cut the text into semi-redundant sequences of maxlen words
dataset <- map(
  seq(1, length(text) - maxlen - 1, by = 3), 
  ~list(sentence = text[.x:(.x + maxlen - 1)], next_word = text[.x + maxlen])
)

dataset <- transpose(dataset)

# Vectorization: one-hot encode each sequence and its next word
X <- array(0, dim = c(length(dataset$sentence), maxlen, length(words)))
y <- array(0, dim = c(length(dataset$sentence), length(words)))

for (i in 1:length(dataset$sentence)) {

  X[i, , ] <- sapply(words, function(x) {
    as.integer(x == dataset$sentence[[i]])
  })

  y[i, ] <- as.integer(words == dataset$next_word[[i]])

}


Error: cannot allocate vector of size 103.5 Gb
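The size checks out: seq(1, 101345 - 40 - 1, by = 3) yields 33,768 start positions, so X would hold 33,768 × 40 × 10,283 ≈ 1.39 × 10^10 double-precision values, and at 8 bytes each that is ≈ 103.5 GiB, exactly the allocation R refuses.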

Now, compared with the character example, I have far more words in my vocabulary than there were characters, which is probably why I am hitting the vector-size problem. But how would I preprocess word-level text data to fit an RNN? Is this somehow done with an embedding layer? Do I need to remove stop words or stem the words to reduce the vocabulary?

Edit: I am still looking for a solution to this problem, but some additional background and ideas are given here: https://github.com/rstudio/keras/issues/161
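For reference, one way to avoid building the one-hot array at all, as the question suspects, is an embedding layer over integer word indices. Below is a minimal, untested sketch along those lines; it reuses text, words, and maxlen from above, and the 128-dimensional embedding, 128 LSTM units, adam optimizer, and batch size are arbitrary placeholder choices:

library(keras)

# Integer index (1..V) for every token in the corpus
word_index <- match(text, words)

starts <- seq(1, length(word_index) - maxlen - 1, by = 3)

# X: one row of maxlen word ids per sequence -- a few MB instead of 103.5 GB
X <- t(sapply(starts, function(i) word_index[i:(i + maxlen - 1)]))
y <- word_index[starts + maxlen] - 1  # zero-based labels for the sparse loss

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = length(words) + 1, output_dim = 128,
                  input_length = maxlen) %>%
  layer_lstm(units = 128) %>%
  layer_dense(units = length(words), activation = "softmax")

model %>% compile(
  loss = "sparse_categorical_crossentropy",
  optimizer = "adam"
)

model %>% fit(X, y, batch_size = 128, epochs = 1)

Because the targets are plain integer ids, sparse_categorical_crossentropy also replaces the one-hot y array.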

Answered by 小智 · Score: 0

I think this post may be useful to you, although it takes a slightly different approach from yours. Specifically:

Even so, when I worked with a larger dataset I ran into memory problems whenever I tried to process too many items at once, so I had to break the data into small chunks and train on them that way.
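The chunked idea can be sketched as a batch generator that one-hot encodes only a small sample of windows per training step. The sketch below assumes text, words, and maxlen from the question and an already compiled model with input shape c(maxlen, length(words)); fit_generator, the batch size of 32, and random window sampling are my assumptions, not the answerer's exact method:

batch_size <- 32
starts <- seq(1, length(text) - maxlen - 1, by = 3)

# Generator: one-hot encode one random batch of windows on demand,
# so only ~100 MB is ever allocated at a time
sample_batch <- function() {
  idx <- sample(starts, batch_size)
  X <- array(0, dim = c(batch_size, maxlen, length(words)))
  y <- array(0, dim = c(batch_size, length(words)))
  for (j in seq_len(batch_size)) {
    window <- text[idx[j]:(idx[j] + maxlen - 1)]
    X[j, , ] <- sapply(words, function(w) as.integer(w == window))
    y[j, ] <- as.integer(words == text[idx[j] + maxlen])
  }
  list(X, y)
}

model %>% fit_generator(
  sample_batch,
  steps_per_epoch = length(starts) %/% batch_size,
  epochs = 1
)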