Ada*_*ex3 5 python machine-learning deep-learning tensorflow tensorflow2.0
I'm very new to TensorFlow and have been messing around with a simple chatbot-building project from this link.
There were a lot of warnings saying that things would be deprecated in TensorFlow 2.0 and that I should upgrade, so I did. I then used the automatic TensorFlow code upgrade script to update all the necessary files to 2.0. There were a few errors with this.
When processing the model.py file, it returned the following warnings:
133:20: WARNING: tf.nn.sampled_softmax_loss requires manual check. `partition_strategy` has been removed from tf.nn.sampled_softmax_loss. The 'div' strategy will be used by default.
148:31: WARNING: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib.rnn. (Manual edit required) tf.contrib.rnn.* has been deprecated, and widely used cells/functions will be moved to tensorflow/addons repository. Please check it there and file Github issues if necessary.
148:31: ERROR: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib. tf.contrib.rnn.DropoutWrapper cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
171:33: ERROR: Using member tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
197:27: ERROR: Using member tf.contrib.legacy_seq2seq.sequence_loss in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.sequence_loss cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
The main problem I'm running into is with the code that uses the contrib module, which no longer exists. How can I adjust the following three blocks of code so that they work in TensorFlow 2.0?
# Define the network
# Here we use an embedding model: it takes integers as input and converts them into word vectors
# for a better word representation
decoderOutputs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
    self.encoderInputs,  # List<[batch=?, inputDim=1]>, list of size args.maxLength
    self.decoderInputs,  # For training, we force the correct output (feed_previous=False)
    encoDecoCell,
    self.textData.getVocabularySize(),
    self.textData.getVocabularySize(),  # Both encoder and decoder have the same number of classes
    embedding_size=self.args.embeddingSize,  # Dimension of each word
    output_projection=outputProjection.getWeights() if outputProjection else None,
    feed_previous=bool(self.args.test)  # When testing (self.args.test), we use the previous output as the next input (feed_previous)
)
# Finally, we define the loss function
self.lossFct = tf.contrib.legacy_seq2seq.sequence_loss(
    decoderOutputs,
    self.decoderTargets,
    self.decoderWeights,
    self.textData.getVocabularySize(),
    softmax_loss_function=sampledSoftmax if outputProjection else None  # If None, use the default SoftMax
)
encoDecoCell = tf.contrib.rnn.DropoutWrapper(
    encoDecoCell,
    input_keep_prob=1.0,
    output_keep_prob=self.args.dropout
)
小智 3
tf.contrib was essentially a collection of contributions made by the TensorFlow community, and here is how it has been handled.
In TensorFlow 2, contrib has been removed, and every project that lived in contrib had one of three options for its future: move into core TensorFlow, move to a separate repository, or be deleted.
You can see the full list of which projects fall into which category at this link.
As for an alternative solution: migrating code from TensorFlow 1 to TensorFlow 2 does not happen automatically; you have to make the changes manually.
You can use the following alternatives.
tf.contrib.rnn.DropoutWrapper can be changed to tf.compat.v1.nn.rnn_cell.DropoutWrapper, as in the sketch below.
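A minimal sketch, using a made-up LSTM cell and keep probability in place of your encoDecoCell and self.args.dropout; the wrapper arguments themselves stay the same:

import tensorflow as tf

# Hypothetical stand-ins for the question's cell and self.args.dropout
base_cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(num_units=512)
dropout_keep_prob = 0.9

# Drop-in replacement for tf.contrib.rnn.DropoutWrapper
encoDecoCell = tf.compat.v1.nn.rnn_cell.DropoutWrapper(
    base_cell,
    input_keep_prob=1.0,
    output_keep_prob=dropout_keep_prob
)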
For sequence-to-sequence, you can use TensorFlow Addons.
The TensorFlow Addons project includes many sequence-to-sequence tools that let you easily build a production-ready encoder-decoder.
For example, you can use something like the following.
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras

# vocab_size and embed_size are assumed to be defined elsewhere (e.g. from your dataset)
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)

embeddings = keras.layers.Embedding(vocab_size, embed_size)
encoder_embeddings = embeddings(encoder_inputs)
decoder_embeddings = embeddings(decoder_inputs)

encoder = keras.layers.LSTM(512, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]

sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = keras.layers.LSTMCell(512)
output_layer = keras.layers.Dense(vocab_size)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(
    decoder_cell, sampler, output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
    decoder_embeddings, initial_state=encoder_state,
    sequence_length=sequence_lengths)
Y_proba = tf.nn.softmax(final_outputs.rnn_output)

model = keras.Model(inputs=[encoder_inputs, decoder_inputs, sequence_lengths],
                    outputs=[Y_proba])

Likewise, you need to change every method that uses tf.contrib to a compatible one.
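As a quick sanity check, a minimal training sketch on random integer data could look like this; it reuses the model built above, and the data shapes, epoch count, and the assumption that vocab_size is at least 100 are made up for illustration, not taken from your project:

import numpy as np

# Made-up toy data: 1,000 source sequences of length 10 and target sequences of length 15
X = np.random.randint(100, size=10 * 1000).reshape(1000, 10)
Y = np.random.randint(100, size=15 * 1000).reshape(1000, 15)
X_decoder = np.c_[np.zeros((1000, 1)), Y[:, :-1]]  # decoder inputs: targets shifted right, 0 as a start token
seq_lengths = np.full([1000], 15)

model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
history = model.fit([X, X_decoder, seq_lengths], Y, epochs=2)

For your tf.contrib.legacy_seq2seq.sequence_loss call in particular, tfa.seq2seq.sequence_loss is usually the closest replacement; note that it expects batch-major [batch, time, vocab] logits rather than the list of per-step tensors that legacy_seq2seq returns. A hedged, self-contained sketch with made-up shapes:

import tensorflow as tf
import tensorflow_addons as tfa

# Made-up shapes: batch of 2, 5 time steps, vocabulary of 100
logits = tf.random.normal([2, 5, 100])
targets = tf.random.uniform([2, 5], maxval=100, dtype=tf.int32)
weights = tf.ones([2, 5])

loss = tfa.seq2seq.sequence_loss(logits, targets, weights)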
I hope this answers your question.