InvalidArgumentError: indices[127,7] = 43 is not in [0, 43) in Keras R

Hen*_*ski 3 indexing r tokenize keras

This question is related to: InvalidArgumentError (see above for traceback): indices[1] = 10 is not in [0, 10). I need this for R, so I am looking for a different solution than the one given in the linked question.

library(keras)

maxlen <- 40

# Character vocabulary: punctuation, "0", and the lowercase letters (43 symbols in total)
chars <- c("'",  "-",  " ",  "!",  "\"", "(",  ")",  ",",  ".",  ":",  ";",  "?",  "[",  "]",  "_",  "=",  "0",
           "a",  "b",  "c",  "d",  "e",  "f",  "g",  "h",  "i",  "j",  "k",  "l",  "m",  "n",  "o",  "p",
           "q",  "r",  "s",  "t",  "u",  "v",  "w",  "x",  "y",  "z")

# Character-level tokenizer with no filtering
tokenizer <- text_tokenizer(char_level = TRUE, filters = NULL)
tokenizer %>% fit_text_tokenizer(chars)

# Inspect the learned character -> index mapping
unlist(tokenizer$word_index)

The output is:

 '  -     !  "  (  )  ,  .  :  ;  ?  [  ]  _  =  0  a  b  c  d  e  f  g  h  i  j  k  l  m  n  o  p  q  r  s  t  u  v  w  x  y  z 
 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 

How can I change the indexing so that it starts at 0 instead of 1 in text_tokenizer?

The error I get after running fit() is as follows:

InvalidArgumentError: indices[127,7] = 43 is not in [0, 43)
     [[Node: embedding_3/embedding_lookup = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@training_1/RMSprop/Assign_1"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_3/embeddings/read, embedding_3/Cast, training_1/RMSprop/gradients/embedding_3/embedding_lookup_grad/concat/axis)]]

But I believe that changing the indexing would solve my problem.
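For context, the model itself is not shown above, but a sketch along the following lines (the layer sizes, the sample text, and the sparse-categorical setup are assumptions on my part) reproduces the error: the tokenizer assigns indices 1 through 43, while an embedding declared with input_dim = 43 only accepts indices in [0, 43), so it fails as soon as the character mapped to 43 shows up in a batch.

# Hypothetical reproduction -- the original model is not shown in the question
library(keras)

# Integer-encode a sample string at the character level (indices run 1..43; "z" maps to 43)
seqs <- texts_to_sequences(tokenizer, "the quick brown fox jumps over the lazy dog")
x <- pad_sequences(seqs, maxlen = maxlen)

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 43, output_dim = 16, input_length = maxlen) %>%  # too small by one
  layer_lstm(units = 32) %>%
  layer_dense(units = 43, activation = "softmax")

model %>% compile(loss = "sparse_categorical_crossentropy", optimizer = "rmsprop")
# Calling fit() on this model raises the InvalidArgumentError shown above
# as soon as an index equal to 43 reaches the embedding lookup.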

nur*_*ric 12

Index 0 is usually reserved for padding, so starting your actual character indices at 0 is not a good idea either. Instead, you should take the route suggested by the documentation of the Embedding layer and add 1 to the input size:

input_dim: int > 0. Size of the vocabulary, i.e. maximum integer index + 1.

In your case that would be 43 + 1 = 44.
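In the R interface that means passing input_dim = 44 (or, more robustly, length(tokenizer$word_index) + 1) to layer_embedding. A minimal sketch of the corrected layer, with the other layer parameters as illustrative placeholders:

library(keras)

# Index 0 is reserved for padding, indices 1..43 are the characters,
# so the embedding must cover 44 entries in total.
vocab_size <- length(tokenizer$word_index) + 1  # 43 + 1 = 44

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = vocab_size, output_dim = 16, input_length = maxlen) %>%
  layer_lstm(units = 32) %>%
  layer_dense(units = vocab_size, activation = "softmax")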