Is there a way to visualize the attention weights on some inputs in a TensorFlow seq2seq model, like the figure in the link above (from Bahdanau et al., 2014)? I have found TensorFlow's GitHub issue on this, but I could not figure out how to fetch the attention mask during a session.
deep-learning tensorflow attention-model sequence-to-sequence