I created a custom loss function that takes a parameter:
from keras import backend as K

def w_categorical_crossentropy(weights):
    def loss(y_true, y_pred):
        print(weights)
        print("----")
        print(weights.shape)
        final_mask = K.zeros_like(y_pred[:, 0])
        y_pred_max = K.max(y_pred, axis=1)
        y_pred_max = K.reshape(y_pred_max, (K.shape(y_pred)[0], 1))
        y_pred_max_mat = K.cast(K.equal(y_pred, y_pred_max), K.floatx())
        # note: the backend signature is categorical_crossentropy(y_true, y_pred)
        return K.categorical_crossentropy(y_true, y_pred)
    return loss
Now I need to check the value of the weights parameter, but the print calls don't work as expected. Is there a way to print the value of weights?
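An aside on the printing itself: if weights is a symbolic tensor, a plain print() only shows the tensor's placeholder once, at graph-construction time. One way to see the actual values is K.print_tensor, which returns its input and prints the value whenever the loss is evaluated. A minimal sketch, assuming TensorFlow-backed Keras (the cross-entropy line here is just a stand-in for the full loss):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def w_categorical_crossentropy(weights):
    def loss(y_true, y_pred):
        # print_tensor returns its argument unchanged and prints the
        # value each time the loss is executed (not just at trace time)
        w = K.print_tensor(weights, message='weights=')
        # ... use w in place of weights in the rest of the loss ...
        return K.categorical_crossentropy(y_true, y_pred)
    return loss
```
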
What I sometimes do (certainly not the best solution, and not always possible) is simply replace the K backend with np and test the function on some simple data. Here is an example.

The original Keras function:
def loss(y_true, y_pred):
    # old_prediction, advantage and self.LOSS_CLIPPING come from the enclosing scope
    means = K.reshape(y_pred[:, 0], (-1, 1))
    stds = K.reshape(y_pred[:, 1], (-1, 1))
    var = K.square(stds)
    denom = K.sqrt(2 * np.pi * var)
    prob_num = -K.square(y_true - means) / (2 * var)
    prob = prob_num - denom
    r = K.exp(prob - old_prediction)
    return -K.mean(K.minimum(r * advantage,
                             K.clip(r, min_value=1 - self.LOSS_CLIPPING,
                                    max_value=1 + self.LOSS_CLIPPING) * advantage))
The test version:
def loss(y_true, y_pred):
    means = np.reshape(y_pred[:, 0], (-1, 1))
    stds = np.reshape(y_pred[:, 1], (-1, 1))
    var = np.square(stds)
    print(var.shape)
    denom = np.sqrt(2 * np.pi * var)
    print(denom.shape)
    prob_num = -np.square(y_true - means) / (2 * var)
    prob = prob_num - denom
    r = np.exp(prob - old_prediction)
    print(r.shape)
    cliped = np.minimum(r * advantage,
                        np.clip(r, a_min=1 - LOSS_CLIPPING,
                                a_max=1 + LOSS_CLIPPING) * advantage)
    print(cliped.shape)
    return -np.mean(cliped)
Testing it:
LOSS_CLIPPING = 0.2
y_pred = np.array([[2,1], [1, 1], [5, 1]])
y_true = np.array([[1], [3], [2]])
old_prediction = np.array([[-2], [-5], [-6]])
advantage = np.array([[ 0.51467506],[-0.64960159],[-0.53304715]])
loss(y_true, y_pred)
Running the above produces:
(3, 1)
(3, 1)
(3, 1)
(3, 1)
0.43409555193679816
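Applied to the question's loss, the same trick makes the print calls behave normally, since with plain numpy arrays weights is an ordinary object. A rough sketch (the numpy cross-entropy below is a hand-rolled stand-in for illustration, not the Keras implementation):

```python
import numpy as np

# hypothetical numpy version of the asker's w_categorical_crossentropy
def w_categorical_crossentropy_np(weights):
    def loss(y_true, y_pred):
        print(weights)         # plain numpy: printing works as expected
        print(weights.shape)
        # numpy categorical cross-entropy: -mean over rows of sum(y_true * log(y_pred))
        eps = 1e-7
        return -np.mean(np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=-1))
    return loss

weights = np.ones((3, 3))
y_true = np.array([[1., 0., 0.], [0., 1., 0.]])
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
print(w_categorical_crossentropy_np(weights)(y_true, y_pred))
```

Once the shapes and values look right on small arrays, switching np back to K restores the trainable version.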
Run Code Online (Sandbox Code Playgroud)