I want to train a 21-class text classification model with BERT, but I have very little training data. So I downloaded a similar dataset with 5 classes and about 2 million samples, fine-tuned the uncased pre-trained model provided by BERT on that downloaded data, and got roughly 98% validation accuracy. Now I want to use this fine-tuned model as the pre-trained model for my small custom dataset, but I get a `shape mismatch with tensor output_bias from checkpoint reader` error,
since the checkpoint model has 5 classes while my custom data has 21 classes.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Running train on CPU
INFO:tensorflow:*** Features ***
INFO:tensorflow: name = input_ids, shape = (32, 128)
INFO:tensorflow: name = input_mask, shape = (32, 128)
INFO:tensorflow: name = is_real_example, shape = (32,)
INFO:tensorflow: name = label_ids, shape = (32, 21)
INFO:tensorflow: name = segment_ids, shape = (32, 128)
Tensor("IteratorGetNext:3", shape=(32, 21), dtype=int32)
WARNING:tensorflow:From /home/user/Spine_NLP/bert/modeling.py:358: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be …
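One common workaround for this error is to restore every checkpoint variable except the classification head (`output_weights` / `output_bias`), which has a class-count-dependent shape, and let the head be freshly initialized for 21 classes. The helper below is not from the BERT repo; it is a minimal sketch of the filtering logic, written over plain name-to-shape dicts so it stands alone. `build_assignment_map` is a hypothetical name; in TF1-style code the resulting map would be passed to `tf.train.init_from_checkpoint`.

```python
def build_assignment_map(ckpt_vars, model_vars,
                         exclude=("output_weights", "output_bias")):
    """Map checkpoint variable names to model variable names, skipping
    the classifier head whose shape depends on the number of labels.

    ckpt_vars / model_vars: dicts of variable name -> shape (list of ints).
    Returns a dict usable as an assignment_map: {ckpt_name: model_name}.
    """
    assignment_map = {}
    for name, shape in model_vars.items():
        # Skip the head: the 5-class [5]-shaped bias cannot be loaded
        # into the new 21-class [21]-shaped variable.
        if any(name.endswith(suffix) for suffix in exclude):
            continue
        # Only restore variables that exist in the checkpoint with a
        # matching shape; everything else keeps its fresh initializer.
        if name in ckpt_vars and list(ckpt_vars[name]) == list(shape):
            assignment_map[name] = name
    return assignment_map

# Example: the shared BERT encoder weights are restored, the head is not.
ckpt = {"bert/embeddings/word_embeddings": [30522, 768], "output_bias": [5]}
model = {"bert/embeddings/word_embeddings": [30522, 768], "output_bias": [21]}
amap = build_assignment_map(ckpt, model)
```

With TF1 one would then call `tf.train.init_from_checkpoint(init_checkpoint, amap)` before training, so only the shape-compatible encoder variables are overwritten from the 5-class checkpoint.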
I am trying to copy one image into another by slicing NumPy arrays, but in imshow I get a completely black output when I use dtype=int;
otherwise it displays as shown in the image below. The pixel values in both images are identical. Here is the sample code:
import sys
import cv2
import numpy as np

def main():
    img = cv2.imread('ele.jpg', 1)
    h, w, c = img.shape
    img_copy = np.empty((h, w, c), dtype=int)
    img_copy[0:h, 0:w] = img
    print(img[50:54, 50:54])
    print(img_copy[50:54, 50:54].shape)
    cv2.imshow('ele', img)
    cv2.imshow('ele-copy', img_copy)
    cv2.waitKey(0)

if __name__ == '__main__':
    main()
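The likely culprit is the dtype, not the pixel values: `cv2.imread` returns a uint8 array, while `dtype=int` creates a platform int (usually int64), and `cv2.imshow` only displays uint8 data as 0..255 (floats are interpreted as 0.0..1.0). A minimal sketch of the mismatch, using a synthetic array instead of `ele.jpg` so it runs without OpenCV:

```python
import numpy as np

# Stand-in for cv2.imread output: OpenCV images are uint8.
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# The question's copy: same values, but int64 dtype on most platforms.
bad_copy = np.empty(img.shape, dtype=int)
bad_copy[:] = img

# Matching the source dtype keeps the array displayable by cv2.imshow.
good_copy = np.empty(img.shape, dtype=np.uint8)
good_copy[:] = img

assert (bad_copy == img).all()       # pixel values are identical...
assert bad_copy.dtype != img.dtype   # ...but the dtype differs
assert good_copy.dtype == img.dtype
```

In the original snippet, changing `dtype=int` to `dtype=np.uint8` (or simply using `img.copy()`, which preserves dtype) should make both imshow windows look the same.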