How to compute categorical cross-entropy by hand?

der*_*eks 2 python artificial-intelligence tensorflow

When I compute binary cross-entropy by hand, I apply the sigmoid to get probabilities, then use the cross-entropy formula and check the result:

import tensorflow as tf

logits = tf.constant([-1, -1, 0, 1, 2.])
labels = tf.constant([0, 0, 1, 1, 1.])

probs = tf.nn.sigmoid(logits)
loss = labels * (-tf.math.log(probs)) + (1 - labels) * (-tf.math.log(1 - probs))
print(tf.reduce_mean(loss).numpy()) # 0.35197204

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
loss = cross_entropy(labels, logits)
print(loss.numpy()) # 0.35197204
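
The same number can also be cross-checked against the lower-level op tf.nn.sigmoid_cross_entropy_with_logits, which computes the per-element binary cross-entropy directly from the logits (more numerically stable than taking log(sigmoid(x)) by hand). A minimal sketch, reusing the labels and logits above:

losses = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
print(tf.reduce_mean(losses).numpy())  # ~0.35197, same as above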

How do I compute categorical cross-entropy when the logits and labels have different sizes?


I mean, how can I manually reproduce a result like [2.0077195 0.00928135 0.6800677] (one loss value per sample)?
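
For illustration (these are made-up values, not the ones behind the numbers above), the size mismatch looks like this: integer labels of shape (3,) against per-class logits of shape (3, 10). The built-in loss and the manual softmax + log computation agree:

import tensorflow as tf

example_labels = tf.constant([3, 0, 7])                  # (3,) integer class ids
example_logits = tf.random.normal((3, 10))               # (3, 10) one score per class

builtin = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
print(builtin(example_labels, example_logits).numpy())   # one loss value per sample

one_hot = tf.one_hot(example_labels, 10)
probs = tf.nn.softmax(example_logits)
manual = tf.reduce_sum(one_hot * -tf.math.log(probs), axis=-1)
print(manual.numpy())                                    # matches up to float precision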

@OverLordGoldDragon's answer is correct. In TF 2.0 it looks like this:

# `labels` here are the integer class ids and `logits` the per-class scores (10 classes) from the question
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
loss = loss_object(labels, logits)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')

one_hot_labels = tf.one_hot(labels, 10)

preds = tf.nn.softmax(logits)
preds /= tf.math.reduce_sum(preds, axis=-1, keepdims=True)  # re-normalize (a no-op right after softmax)
loss = tf.math.reduce_sum(tf.math.multiply(one_hot_labels, -tf.math.log(preds)), axis=-1)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
# [2.0077195  0.00928135 0.6800677 ]
# 2.697068691253662
# [2.0077198  0.00928142 0.6800677 ]
# 2.697068929672241

For a language model:

vocab_size = 9
seq_len = 6
batch_size = 2

labels = tf.reshape(tf.range(batch_size*seq_len), (batch_size,seq_len)) # (2, 6); ids run 0-11, so some exceed vocab_size - 1
logits = tf.random.normal((batch_size,seq_len,vocab_size)) # (2, 6, 9)

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
loss = loss_object(labels, logits)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')

one_hot_labels = tf.one_hot(labels, vocab_size)  # out-of-range ids become all-zero rows, hence the 0. losses below

preds = tf.nn.softmax(logits)
preds /= tf.math.reduce_sum(preds, axis=-1, keepdims=True)
loss = tf.math.reduce_sum(tf.math.multiply(one_hot_labels, -tf.math.log(preds)), axis=-1)
print(f'{loss.numpy()}\n{tf.math.reduce_sum(loss).numpy()}')
# [[1.341706  3.2518263 2.6482694 3.039099  1.5835983 4.3498387]
#  [2.67237   3.3978183 2.8657475       nan       nan       nan]]
# nan
# [[1.341706  3.2518263 2.6482694 3.039099  1.5835984 4.3498387]
#  [2.67237   3.3978183 2.8657475 0.        0.        0.       ]]
# 25.1502742767334

Ove*_*gon 8

SparseCategoricalCrossentropy and CategoricalCrossentropy compute the same thing; the difference is that SparseCategoricalCrossentropy takes integer labels rather than one-hot ones. Example based on the source code - the following two are equivalent:

# TF 1.x graph-mode example; K is the Keras backend
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

# reduction='none' keeps the per-sample losses printed below
scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False, reduction='none')
cce  = tf.keras.losses.CategoricalCrossentropy(from_logits=False, reduction='none')

labels_scce = K.variable([[0, 1, 2]])
labels_cce  = K.variable([[1,    0,  0], [0,    1,  0], [0,   0,   1]])
preds       = K.variable([[.90,.05,.05], [.50,.89,.60], [.05,.01,.94]])

loss_cce  = cce(labels_cce,   preds)
loss_scce = scce(labels_scce, preds)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run([loss_cce, loss_scce])

print(K.get_value(loss_cce))
print(K.get_value(loss_scce))
# [0.10536055  0.8046684  0.0618754]
# [0.10536055  0.8046684  0.0618754]

As for how it is done "by hand", we can reproduce it with Numpy:

np_labels = K.get_value(labels_cce)
np_preds  = K.get_value(preds)

losses = []
for label, pred in zip(np_labels, np_preds):
    pred /= pred.sum(axis=-1, keepdims=True)   # normalize each row into a probability distribution
    losses.append(np.sum(label * -np.log(pred), axis=-1, keepdims=False))  # keep only -log(prob) of the true class
print(losses)
# [0.10536055  0.8046684  0.0618754]
  • from_logits = True: preds are the model outputs before softmax is applied (so we pass them through softmax ourselves)
  • from_logits = False: preds are the model outputs after softmax has been applied (so we skip that step); see the short sketch below
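
A quick sketch of the distinction in TF 2.x (made-up values): feeding raw scores with from_logits=True should give the same loss as feeding their softmax with from_logits=False:

raw_scores = tf.constant([[2.0, 1.0, 0.1]])   # hypothetical pre-softmax outputs
label      = tf.constant([0])

scce_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
scce_probs  = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

print(scce_logits(label, raw_scores).numpy())                # softmax applied internally
print(scce_probs(label, tf.nn.softmax(raw_scores)).numpy())  # same value, ~0.417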

To summarize, the manual computation goes as follows (a compact sketch follows the list):

  1. Convert the integer labels to one-hot labels
  2. If preds are the model outputs before softmax, compute their softmax
  3. pred /= ... normalizes the predictions before the log is taken; this way, putting high probability on the zero-labels penalizes the prediction on the one-label. If from_logits = True, this step is skipped, since the softmax of step 2 already normalizes. See this snippet for further reading
  4. For each observation/sample, take the element-wise negative log (base e) only where label == 1
  5. Take the mean of the losses over all observations
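
A compact sketch of those five steps (the helper name is just for illustration), checked against the same labels/preds values used earlier:

import numpy as np

def categorical_crossentropy_by_hand(int_labels, scores, from_logits=True):
    # int_labels: (N,) integer class ids; scores: (N, C) model outputs
    num_classes = scores.shape[-1]
    one_hot = np.eye(num_classes)[int_labels]                   # 1. one-hot the labels
    if from_logits:                                             # 2. softmax if scores are pre-softmax
        e = np.exp(scores - scores.max(axis=-1, keepdims=True))
        preds = e / e.sum(axis=-1, keepdims=True)
    else:                                                       # 3. otherwise just re-normalize
        preds = scores / scores.sum(axis=-1, keepdims=True)
    per_sample = np.sum(one_hot * -np.log(preds), axis=-1)      # 4. -log only where label == 1
    return per_sample.mean()                                    # 5. mean over observations

np_int_labels = np.array([0, 1, 2])
np_probs      = np.array([[.90,.05,.05], [.50,.89,.60], [.05,.01,.94]])
print(categorical_crossentropy_by_hand(np_int_labels, np_probs, from_logits=False))
# ~0.3240, the mean of [0.10536, 0.80467, 0.06188] from above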

Finally, the mathematical formula for categorical cross-entropy is:

  -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} \mathbb{1}_{y_i \in C_c} \, \log p_{model}[y_i \in C_c]

where:

  • i iterates over the N observations
  • c iterates over the C classes
  • \mathbb{1} is the indicator function - it works here just as in binary cross-entropy, except it operates over length-C vectors
  • p_{model}[y_i \in C_c] - the predicted probability of observation i belonging to class c
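
As a quick check of the formula against the first sample of the example above (true class 0, normalized prediction 0.90 for that class):

  -\log(0.90) \approx 0.10536

which is the first entry of the per-sample losses printed earlier; averaging the three entries gives the reduced loss, roughly 0.324.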