How do I create a TFRecord from a list?
From the docs here it seems possible. There is also this example where they use numpy's .tostring()
to convert a numpy array into a byte array. But when I try to pass in:
labels = np.asarray([[1,2,3],[4,5,6]])
...
example = tf.train.Example(features=tf.train.Features(feature={
'height': _int64_feature(rows),
'width': _int64_feature(cols),
'depth': _int64_feature(depth),
'label': _int64_feature(labels[index]),
'image_raw': _bytes_feature(image_raw)}))
writer.write(example.SerializeToString())
I get the error:
TypeError: array([1, 2, 3]) has type <type 'numpy.ndarray'>, but expected one of: (<type 'int'>, <type 'long'>)
This doesn't help me figure out how to store a list of integers in a TFRecord. I've tried looking through the documentation.
I'm loading from a saved model, and I want to be able to reset a TensorFlow optimizer such as the Adam optimizer. Ideally something like:
sess.run([tf.initialize_variables(Adamopt)])
or
sess.run([Adamopt.reset])
I've tried searching for an answer but haven't found one yet. Here is what I found, which doesn't solve the problem: https://github.com/tensorflow/tensorflow/issues/634
I basically just want a way to reset the "slot" variables in the Adam optimizer.
Thanks
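One way this is commonly done is to collect Adam's slot variables and re-run their initializers. A minimal sketch, assuming TF 1.x graph mode (`tf.compat.v1` under TF 2); note that `_get_beta_accumulators` is a private API and may change between versions:

```python
# Sketch: reset Adam's slot variables (m, v) plus its beta-power
# accumulators by re-running their initializers.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.Variable(3.0, name="x")
loss = tf.square(x)
opt = tf.train.AdamOptimizer(learning_rate=0.1)
train_op = opt.minimize(loss)

# Per-variable slots ('m', 'v') plus Adam's global beta accumulators.
adam_vars = [opt.get_slot(x, name) for name in opt.get_slot_names()]
adam_vars += list(opt._get_beta_accumulators())  # private API; may change
reset_adam = tf.variables_initializer(adam_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)                          # populates m and v
    m_before = sess.run(opt.get_slot(x, "m"))   # nonzero after a step
    sess.run(reset_adam)                        # Adam state re-initialized
    m_after = sess.run(opt.get_slot(x, "m"))    # back to zero
```

Running `reset_adam` touches only the optimizer's state, leaving the model variables themselves untouched.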
I'm trying to train a Transformer seq2seq model using the nn.Transformer class. I believe my implementation is wrong, because when I train it, it seems to fit too quickly, and during inference it frequently repeats itself. This looks like a masking problem in the decoder: when I remove the target mask, training performance is the same. That leads me to believe I'm doing the target masking wrong. Here is my model code:
import torch
import torch.nn as nn

class TransformerModel(nn.Module):
    def __init__(self,
                 vocab_size, input_dim, heads, feedforward_dim, encoder_layers, decoder_layers,
                 sos_token, eos_token, pad_token, max_len=200, dropout=0.5,
                 device=(torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu"))):
        super(TransformerModel, self).__init__()
        self.target_mask = None
        self.embedding = nn.Embedding(vocab_size, input_dim, padding_idx=pad_token)
        self.pos_embedding = nn.Embedding(max_len, input_dim, padding_idx=pad_token)
        self.transformer = nn.Transformer(
            d_model=input_dim, nhead=heads, num_encoder_layers=encoder_layers,
            num_decoder_layers=decoder_layers, dim_feedforward=feedforward_dim,
            dropout=dropout)
        self.out = nn.Sequential(
            nn.Linear(input_dim, feedforward_dim),
            nn.ReLU(),
            nn.Linear(feedforward_dim, vocab_size))
        self.device = device
        self.max_len = max_len
        self.sos_token = sos_token
        self.eos_token = eos_token
        # Initialize all weights to be uniformly distributed between -initrange …
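Since removing the target mask changed nothing, one thing worth checking is that the tensor passed as `tgt_mask` actually has the causal shape nn.Transformer expects: a `(T, T)` float matrix that is `0.0` where attention is allowed and `-inf` above the diagonal. A minimal sketch of building such a mask by hand (version-independent, without relying on `generate_square_subsequent_mask`):

```python
# Sketch: a causal (subsequent) target mask for nn.Transformer.
# Position i may attend only to positions <= i; everything above the
# diagonal is blocked with -inf.
import torch

def causal_mask(seq_len):
    # Upper triangle (excluding the diagonal) set to -inf, rest 0.0.
    return torch.triu(torch.full((seq_len, seq_len), float("-inf")),
                      diagonal=1)

mask = causal_mask(4)
# It must be rebuilt per forward call, sized to the current target length:
# out = self.transformer(src, tgt, tgt_mask=causal_mask(tgt.size(0)))
```

If the mask is all zeros, the wrong shape, or boolean where floats are expected, the decoder can attend to future tokens during training, which matches the symptoms described above (fits too fast, repeats at inference).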