Tok*_*rby 21 python deep-learning keras tensorflow keras-layer
I am using Windows 10, Python 3.5, and tensorflow 1.1.0. I have the following script:
import tensorflow as tf
import tensorflow.contrib.keras.api.keras.backend as K
from tensorflow.contrib.keras.api.keras.layers import Dense
tf.reset_default_graph()
init = tf.global_variables_initializer()
sess = tf.Session()
K.set_session(sess) # Keras will use this session to initialize all variables
input_x = tf.placeholder(tf.float32, [None, 10], name='input_x')
dense1 = Dense(10, activation='relu')(input_x)
sess.run(init)
dense1.get_weights()
I get the error: AttributeError: 'Tensor' object has no attribute 'weights'
Onn*_*man 45
If you want to get the weights and biases of all layers, you can simply use:
for layer in model.layers: print(layer.get_config(), layer.get_weights())
This will print all the relevant information.
If you want the weights returned directly as numpy arrays, you can use:
first_layer_weights = model.layers[0].get_weights()[0]
first_layer_biases = model.layers[0].get_weights()[1]
second_layer_weights = model.layers[1].get_weights()[0]
second_layer_biases = model.layers[1].get_weights()[1]
and so on.
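A hedged variant of the same loop that also shows which array belongs to which layer (the model variable is the one from the code above; the unpacking assumes Dense-style layers, which expose exactly a kernel and a bias):
for i, layer in enumerate(model.layers):
    weights, biases = layer.get_weights()  # Dense layers return [kernel, bias]
    print(i, layer.name, weights.shape, biases.shape)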
小智 19
If you write:
dense1 = Dense(10, activation='relu')(input_x)
then dense1 is not a layer; it is the output of the layer. The layer itself is Dense(10, activation='relu').
So it seems what you meant is:
dense1 = Dense(10, activation='relu')
y = dense1(input_x)
Here is a complete snippet:
import tensorflow as tf
from tensorflow.contrib.keras import layers
input_x = tf.placeholder(tf.float32, [None, 10], name='input_x')
dense1 = layers.Dense(10, activation='relu')
y = dense1(input_x)
weights = dense1.get_weights()
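Putting the two answers together with the question's original TF 1.1 setup, a minimal sketch could look like this (placing the initializer after the layer is built is my assumption, not something spelled out in the answers):
import tensorflow as tf
import tensorflow.contrib.keras.api.keras.backend as K
from tensorflow.contrib.keras.api.keras.layers import Dense

tf.reset_default_graph()
sess = tf.Session()
K.set_session(sess)  # Keras will use this session

input_x = tf.placeholder(tf.float32, [None, 10], name='input_x')
dense1 = Dense(10, activation='relu')   # keep a handle to the layer object itself
y = dense1(input_x)                     # calling the layer builds its variables

# Create the initializer only after the layer is built, so it covers the
# layer's kernel and bias variables.
sess.run(tf.global_variables_initializer())

weights, biases = dense1.get_weights()  # numpy arrays
print(weights.shape, biases.shape)      # (10, 10) and (10,)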
小智 18
If you want to see how the weights and biases of your layers change over time, you can add a callback that records their values at each training epoch.
For example, with a model like this,
import numpy as np
# Keras imports needed for this example (tf.keras equivalents work the same way)
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import Callback

model = Sequential([Dense(16, input_shape=(train_inp_s.shape[1:])), Dense(12), Dense(6), Dense(1)])
add the callback via the callbacks kwarg when fitting:
gw = GetWeights()
model.fit(X, y, validation_split=0.15, epochs=10, batch_size=100, callbacks=[gw])
where the callback is defined as:
class GetWeights(Callback):
    # Keras callback which collects values of weights and biases at each epoch
    def __init__(self):
        super(GetWeights, self).__init__()
        self.weight_dict = {}

    def on_epoch_end(self, epoch, logs=None):
        # this function runs at the end of each epoch

        # loop over each layer and get weights and biases
        for layer_i in range(len(self.model.layers)):
            w = self.model.layers[layer_i].get_weights()[0]
            b = self.model.layers[layer_i].get_weights()[1]
            print('Layer %s has weights of shape %s and biases of shape %s' % (
                layer_i, np.shape(w), np.shape(b)))

            # save all weights and biases inside a dictionary
            if epoch == 0:
                # create arrays to hold the weights and biases
                self.weight_dict['w_' + str(layer_i + 1)] = w
                self.weight_dict['b_' + str(layer_i + 1)] = b
            else:
                # append new weights to the previously-created weights array
                self.weight_dict['w_' + str(layer_i + 1)] = np.dstack(
                    (self.weight_dict['w_' + str(layer_i + 1)], w))
                # append new biases to the previously-created biases array
                self.weight_dict['b_' + str(layer_i + 1)] = np.dstack(
                    (self.weight_dict['b_' + str(layer_i + 1)], b))
This callback builds a dictionary containing all of the layer weights and biases, keyed by layer number, so you can see how they change over time as the model trains. You will notice that the shape of each weight and bias array depends on the shape of the corresponding model layer. One weight array and one bias array are saved for every layer in the model, and the third axis (depth) shows their evolution over time.
Here, we used 10 epochs and a model with layers of 16, 12, 6, and 1 neurons:
for key in gw.weight_dict:
    print(str(key) + ' shape: %s' % str(np.shape(gw.weight_dict[key])))

w_1 shape: (5, 16, 10)
b_1 shape: (1, 16, 10)
w_2 shape: (16, 12, 10)
b_2 shape: (1, 12, 10)
w_3 shape: (12, 6, 10)
b_3 shape: (1, 6, 10)
w_4 shape: (6, 1, 10)
b_4 shape: (1, 1, 10)
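As a small, hedged illustration of how these arrays can be read back after training (the gw variable and the 'w_1' key follow the example above; the first two axes are the kernel shape, the third indexes epochs):
import numpy as np

w1_history = gw.weight_dict['w_1']        # shape (5, 16, 10): inputs x units x epochs
w1_first = w1_history[:, :, 0]            # weights after the first epoch
w1_last = w1_history[:, :, -1]            # weights after the last epoch
print(np.abs(w1_last - w1_first).mean())  # average drift over training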
If the layer index numbers are confusing, you can also use the layer name.
Weights:
model.get_layer(<<layer_name>>).get_weights()[0]
Biases:
model.get_layer(<<layer_name>>).get_weights()[1]
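For this to work you need to know the layer's name. A minimal sketch of naming layers explicitly (the names 'hidden_1' and 'output' are purely illustrative, not from the answer):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(16, input_shape=(5,), name='hidden_1'),  # explicit, human-readable name
    Dense(1, name='output'),
])

kernel = model.get_layer('hidden_1').get_weights()[0]  # shape (5, 16)
bias = model.get_layer('hidden_1').get_weights()[1]    # shape (16,)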