Siamese network with LSTM for sentence similarity in Keras regularly gives the same result

MiV*_*e93 6 python-3.x word2vec sentence-similarity lstm keras

I'm new to Keras and I'm trying to solve a sentence-similarity task with a neural network in Keras. I use word2vec as the word embedding, and then a Siamese network to predict how similar two sentences are. The base network of the Siamese network is an LSTM, and to merge the two base networks I use a Lambda layer with a cosine similarity metric. As the dataset I'm using the SICK dataset, which assigns each pair of sentences a score from 1 (different) to 5 (very similar).

I created the network and it runs, but I have a lot of doubts. First, I'm not sure the way I feed the sentences to the LSTM is right. I take the word2vec embedding of each word and build a single array per sentence, padding it with zeros up to seq_len so that all arrays have the same length. Then I reshape it this way: data_A = embedding_A.reshape((len(embedding_A), seq_len, feature_dim))

Besides, I'm not sure my Siamese network is correct, because many predictions for different pairs are identical and the loss barely changes (from 0.3300 to 0.2105 over 10 epochs, and it doesn't change much more over 100 epochs).

Can someone help me find and understand my mistakes? Thanks a lot (and sorry for my bad English).

The relevant part of my code:

import os
import numpy as np
import pandas as pd
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Input, Dense, LSTM, Lambda
from keras.optimizers import Adam

def cosine_distance(vecs):
    #I'm not sure about this function too
    y_true, y_pred = vecs
    y_true = K.l2_normalize(y_true, axis=-1)
    y_pred = K.l2_normalize(y_pred, axis=-1)
    return K.mean(1 - K.sum((y_true * y_pred), axis=-1))

def cosine_dist_output_shape(shapes):
    shape1, shape2 = shapes
    print((shape1[0], 1))
    return (shape1[0], 1)

def contrastive_loss(y_true, y_pred):
    margin = 1
    return K.mean(y_true * K.square(y_pred) + (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))

def create_base_network(feature_dim,seq_len):

    model = Sequential()  
    model.add(LSTM(100, batch_input_shape=(1,seq_len,feature_dim),return_sequences=True))
    model.add(Dense(50, activation='relu'))    
    model.add(Dense(10, activation='relu'))
    return model


def siamese(feature_dim,seq_len, epochs, tr_dataA, tr_dataB, tr_y, te_dataA, te_dataB, te_y):    

    base_network = create_base_network(feature_dim,seq_len)

    input_a = Input(shape=(seq_len,feature_dim,))
    input_b = Input(shape=(seq_len,feature_dim))

    processed_a = base_network(input_a)
    processed_b = base_network(input_b)

    distance = Lambda(cosine_distance, output_shape=cosine_dist_output_shape)([processed_a, processed_b])

    model = Model([input_a, input_b], distance)

    adam = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(optimizer=adam, loss=contrastive_loss)
    model.fit([tr_dataA, tr_dataB], tr_y,
              batch_size=128,
              epochs=epochs,
              validation_data=([te_dataA, te_dataB], te_y))


    pred = model.predict([tr_dataA, tr_dataB])
    tr_acc = compute_accuracy(pred, tr_y)
    for i in range(len(pred)):
        print (pred[i], tr_y[i])


    return model


def padding(max_len, embedding):
    for i in range(len(embedding)):
        padding = np.zeros(max_len-embedding[i].shape[0])
        embedding[i] = np.concatenate((embedding[i], padding))

    embedding = np.array(embedding)
    return embedding

def getAB(sentences_A,sentences_B, feature_dim, word2idx, idx2word, weights,max_len_def=0):
    #from_sentence_to_array : function that transforms natural language sentences 
    #into vectors of real numbers. Each word is replaced with the corrisponding word2vec 
    #embedding, and words that aren't in the embedding are replaced with zeros vector.  
    embedding_A, max_len_A = from_sentence_to_array(sentences_A,word2idx, idx2word, weights)
    embedding_B, max_len_B = from_sentence_to_array(sentences_B,word2idx, idx2word, weights)

    max_len = max(max_len_A, max_len_B,max_len_def*feature_dim)

    #padding to max_len
    embedding_A = padding(max_len, embedding_A)
    embedding_B = padding(max_len, embedding_B)

    seq_len = int(max_len/feature_dim)
    print(seq_len)

    #reshape
    data_A = embedding_A.reshape((len(embedding_A), seq_len, feature_dim))
    data_B = embedding_B.reshape((len(embedding_B), seq_len, feature_dim))

    print('A,B shape: ',data_A.shape, data_B.shape)

    return data_A, data_B, seq_len



FEATURE_DIMENSION = 100
MIN_COUNT = 10
WINDOW = 5

if __name__ == '__main__':

    data = pd.read_csv('data\\train.csv', sep='\t')
    sentences_A = data['sentence_A']
    sentences_B = data['sentence_B']
    tr_y = 1- data['relatedness_score']/5

    if not (os.path.exists(EMBEDDING_PATH)  and os.path.exists(VOCAB_PATH)):    
        create_embeddings(embeddings_path=EMBEDDING_PATH, vocab_path=VOCAB_PATH,  size=FEATURE_DIMENSION, min_count=MIN_COUNT, window=WINDOW, sg=1, iter=25)
    word2idx, idx2word, weights = load_vocab_and_weights(VOCAB_PATH,EMBEDDING_PATH)

    tr_dataA, tr_dataB, seq_len = getAB(sentences_A,sentences_B, FEATURE_DIMENSION,word2idx, idx2word, weights)

    test = pd.read_csv('data\\test.csv', sep='\t')
    test_sentences_A = test['sentence_A']
    test_sentences_B = test['sentence_B']
    te_y = 1- test['relatedness_score']/5

    te_dataA, te_dataB, seq_len = getAB(test_sentences_A,test_sentences_B, FEATURE_DIMENSION,word2idx, idx2word, weights, seq_len) 

    model = siamese(FEATURE_DIMENSION, seq_len, 10, tr_dataA, tr_dataB, tr_y, te_dataA, te_dataB, te_y)


    test_a = ['this is my dog']
    test_b = ['this dog is mine']
    a,b,seq_len = getAB(test_a,test_b, FEATURE_DIMENSION,word2idx, idx2word, weights, seq_len)
    prediction  = model.predict([a, b])
    print(prediction)

Some results:

my prediction | true label 
0.849908 0.8
0.849908 0.8
0.849908 0.74
0.849908 0.76
0.849908 0.66
0.849908 0.72
0.849908 0.64
0.849908 0.8
0.849908 0.78
0.849908 0.8
0.849908 0.8
0.849908 0.8
0.849908 0.8
0.849908 0.74
0.849908 0.8
0.849908 0.8
0.849908 0.8
0.849908 0.66
0.849908 0.8
0.849908 0.66
0.849908 0.56
0.849908 0.8
0.849908 0.8
0.849908 0.76
0.847546 0.78
0.847546 0.8
0.847546 0.74
0.847546 0.76
0.847546 0.72
0.847546 0.8
0.847546 0.78
0.847546 0.8
0.847546 0.72
0.847546 0.8
0.847546 0.8
0.847546 0.78
0.847546 0.8
0.847546 0.78
0.847546 0.78
0.847546 0.46
0.847546 0.72
0.847546 0.8
0.847546 0.76
0.847546 0.8
0.847546 0.8
0.847546 0.8
0.847546 0.8
0.847546 0.74
0.847546 0.8
0.847546 0.72
0.847546 0.68
0.847546 0.56
0.847546 0.8
0.847546 0.78
0.847546 0.78
0.847546 0.8
0.852975 0.64
0.852975 0.78
0.852975 0.8
0.852975 0.8
0.852975 0.44
0.852975 0.72
0.852975 0.8
0.852975 0.8
0.852975 0.76
0.852975 0.8
0.852975 0.8
0.852975 0.8
0.852975 0.78
0.852975 0.8
0.852975 0.8
0.852975 0.78
0.852975 0.8
0.852975 0.8
0.852975 0.76
0.852975 0.8

Yu-*_*ang 5

You are seeing consecutive equal values because the output shape of the cosine_distance function is wrong. When you take K.mean(...) without an axis argument, the result is a scalar. To fix it, simply replace K.mean(...) with K.mean(..., axis=-1) in cosine_distance.
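As a minimal sketch of that fix (keeping everything else in the question's code unchanged, with K being the Keras backend as in the question):

def cosine_distance(vecs):
    # Per-sample cosine distance between the two branch outputs.
    y_true, y_pred = vecs
    y_true = K.l2_normalize(y_true, axis=-1)
    y_pred = K.l2_normalize(y_pred, axis=-1)
    # axis=-1 keeps one value per sample instead of collapsing the batch to a scalar.
    return K.mean(1 - K.sum(y_true * y_pred, axis=-1), axis=-1)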

A more detailed explanation:

When model.predict() is called, the output array pred is first pre-allocated and then filled in with batch predictions. From the source code of training.py:

if batch_index == 0:
    # Pre-allocate the results arrays.
    for batch_out in batch_outs:
        shape = (num_samples,) + batch_out.shape[1:]
        outs.append(np.zeros(shape, dtype=batch_out.dtype))
for i, batch_out in enumerate(batch_outs):
    outs[i][batch_start:batch_end] = batch_out

In your case you have only a single output, so pred is outs[0] in the code above. When batch_out is a scalar (e.g., 0.847546 as shown in your results), the code above is equivalent to pred[batch_start:batch_end] = 0.847546. Since the default batch size of model.predict() is 32, you see 32 consecutive 0.847546 values appearing in the results you posted.
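To see that broadcasting concretely, here is a small NumPy illustration (the sample count and values are made up to mirror the posted results):

import numpy as np

# Pre-allocated output array, as in the Keras snippet above (say, 64 samples).
pred = np.zeros((64, 1))

# If batch_out is a scalar rather than an array of shape (32, 1),
# the slice assignment broadcasts that single value over the whole batch.
pred[0:32] = 0.847546   # first batch of 32 predictions, all identical
pred[32:64] = 0.852975  # second batch, again one repeated value

print(pred[30:34])      # shows the jump from one repeated value to the next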


Another, possibly bigger, problem is the labels. You convert the relatedness scores into labels with tr_y = 1 - data['relatedness_score']/5. Now, if two sentences are "very similar", the relatedness score is 5, so tr_y for these two sentences is 0.

However, in the contrastive loss, when y_true is zero, the term K.maximum(margin - y_pred, 0) effectively says "these two sentences should have a cosine distance >= margin". That is the opposite of what you want the model to learn (also, I don't think you need the K.square in the loss).
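One way to make the labels and the loss consistent is sketched below; it is only an illustration, assuming you relabel so that a value near 1 means "similar" and you drop the K.square terms (swapping the two terms of the loss while keeping your original labels would work just as well):

# Hypothetical relabelling: map the SICK score (1 = different, 5 = very similar)
# to a similarity target in [0, 1], so y_true close to 1 means "similar".
tr_y = (data['relatedness_score'] - 1) / 4

def contrastive_loss(y_true, y_pred):
    # y_pred is the cosine distance produced by the Lambda layer.
    margin = 1.0
    similar_term = y_true * y_pred                                   # pull similar pairs together
    dissimilar_term = (1 - y_true) * K.maximum(margin - y_pred, 0.)  # push dissimilar pairs beyond the margin
    return K.mean(similar_term + dissimilar_term, axis=-1)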