How do I get the binary class predicted by a convolutional neural network in Keras?

RFT*_*xas · 5 · python, machine-learning, text-classification, deep-learning, keras

I am building a CNN for sentiment analysis in Keras. Everything works: the model trains and is ready for production.

However, when I try to predict on new, unlabeled data with model.predict(), it only outputs the associated probability. I tried np.argmax(), but it always outputs 0, even when it should be 1 (on the test set, my model reaches 80% accuracy).

Here is the code I use to pre-process the data:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split

# Pre-processing data
x = df[df.Sentiment != 3].Headlines
y = df[df.Sentiment != 3].Sentiment

# Splitting training, validation, testing dataset
x_train, x_validation_and_test, y_train, y_validation_and_test = train_test_split(x, y, test_size=.3,
                                                                                      random_state=SEED)
x_validation, x_test, y_validation, y_test = train_test_split(x_validation_and_test, y_validation_and_test,
                                                                  test_size=.5, random_state=SEED)

tokenizer = Tokenizer(num_words=NUM_WORDS)
tokenizer.fit_on_texts(x_train)

sequences = tokenizer.texts_to_sequences(x_train)
x_train_seq = pad_sequences(sequences, maxlen=MAXLEN)

sequences_val = tokenizer.texts_to_sequences(x_validation)
x_val_seq = pad_sequences(sequences_val, maxlen=MAXLEN)

sequences_test = tokenizer.texts_to_sequences(x_test)
x_test_seq = pad_sequences(sequences_test, maxlen=MAXLEN)
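As a rough illustration of what texts_to_sequences and pad_sequences produce, here is a pure-Python sketch with a made-up vocabulary and MAXLEN (not the Keras internals):

```python
# Hypothetical mini-example of the tokenize-then-pad pipeline.
# The vocabulary and MAXLEN here are made up for illustration.
word_index = {"stocks": 1, "rise": 2, "fall": 3, "sharply": 4}
MAXLEN = 5

def to_sequence(text):
    # Map each known word to its integer index, dropping unknown words,
    # mirroring what Tokenizer.texts_to_sequences does.
    return [word_index[w] for w in text.lower().split() if w in word_index]

def pad(seq, maxlen):
    # Left-pad with zeros to a fixed length, like pad_sequences' default.
    return [0] * (maxlen - len(seq)) + seq[-maxlen:]

seq = to_sequence("Stocks rise sharply")
print(pad(seq, MAXLEN))  # [0, 0, 1, 2, 4]
```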

Here is my model:

from keras import optimizers
from keras.layers import (Conv1D, Dense, Dropout, Embedding,
                          GlobalMaxPooling1D, Input, concatenate)
from keras.models import Model

MAXLEN = 25
NUM_WORDS = 5000
VECTOR_DIMENSION = 100

tweet_input = Input(shape=(MAXLEN,), dtype='int32')

tweet_encoder = Embedding(NUM_WORDS, VECTOR_DIMENSION, input_length=MAXLEN)(tweet_input)

# Combining n-gram branches (kernel sizes 2-4) to improve results
bigram_branch = Conv1D(filters=100, kernel_size=2, padding='valid', activation="relu", strides=1)(tweet_encoder)
bigram_branch = GlobalMaxPooling1D()(bigram_branch)
trigram_branch = Conv1D(filters=100, kernel_size=3, padding='valid', activation="relu", strides=1)(tweet_encoder)
trigram_branch = GlobalMaxPooling1D()(trigram_branch)
fourgram_branch = Conv1D(filters=100, kernel_size=4, padding='valid', activation="relu", strides=1)(tweet_encoder)
fourgram_branch = GlobalMaxPooling1D()(fourgram_branch)
merged = concatenate([bigram_branch, trigram_branch, fourgram_branch], axis=1)

merged = Dense(256, activation="relu")(merged)
merged = Dropout(0.25)(merged)
output = Dense(1, activation="sigmoid")(merged)

optimizer = optimizers.Adam(lr=0.01)

model = Model(inputs=[tweet_input], outputs=[output])
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=['accuracy'])
model.summary()

# Training the model
history = model.fit(x_train_seq, y_train, batch_size=32, epochs=5, validation_data=(x_val_seq, y_validation))

I also tried changing the number of units in the final Dense layer from 1 to 2, but I got an error:

Error when checking target: expected dense_12 to have shape (2,) but got array with shape (1,)
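(Side note: this error appears because a two-unit Dense layer expects a target of shape (2,) per sample, while y_train holds a single 0/1 scalar per sample. If you do want a two-unit output, you would also switch the activation to softmax and the loss to categorical_crossentropy, and one-hot encode the labels, e.g. with keras.utils.to_categorical. The encoding amounts to the following sketch, shown in plain NumPy with made-up labels:)

```python
import numpy as np

def one_hot(labels, num_classes=2):
    # Convert an array of 0/1 integer labels of shape (n,) into
    # shape (n, num_classes), as a 2-unit softmax output expects.
    labels = np.asarray(labels, dtype=int)
    out = np.zeros((labels.shape[0], num_classes))
    out[np.arange(labels.shape[0]), labels] = 1
    return out

print(one_hot([0, 1, 1]))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]]
```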

tod*_*day · 7

You are doing binary classification, so you have a Dense layer with a single unit and a sigmoid activation. The sigmoid outputs a value in the range [0, 1], which corresponds to the probability that the given sample belongs to the positive class (i.e. class one). Everything below 0.5 is labeled zero (the negative class) and everything above 0.5 is labeled one. So, to find the predicted class you can do the following:

import numpy as np

preds = model.predict(data)
class_one = preds > 0.5

The True elements of class_one correspond to samples labeled 1 (i.e. the positive class).
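This also explains why np.argmax always returned 0: with a single sigmoid unit, model.predict returns an array of shape (n, 1), and the argmax over an axis of length one is always index 0. A small NumPy demonstration with made-up probabilities:

```python
import numpy as np

# Made-up sigmoid outputs, shaped (n, 1) as model.predict returns them.
preds = np.array([[0.1], [0.9], [0.7]])

# argmax along the last axis is always 0: each row has only one element.
print(np.argmax(preds, axis=-1))  # [0 0 0]

# Thresholding at 0.5 gives the intended class labels instead.
print((preds > 0.5).astype(int).ravel())  # [0 1 1]
```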

Bonus: to find the prediction accuracy, you can easily compare class_one with the true labels:

# ravel() flattens class_one from shape (n, 1) to (n,) so the
# comparison does not broadcast against the 1-D true_labels array
acc = np.mean(class_one.ravel() == true_labels)

Note that I am assuming true_labels consists of zeros and ones.


Additionally, if your model were defined using the Sequential class, you could simply use the predict_classes method:

pred_labels = model.predict_classes(data)

However, since you built your model with the Keras functional API (which, in my opinion, is a very good thing to do), you cannot use predict_classes, as it is ill-defined for such models.
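For a functional-API model with a single sigmoid unit, the thresholding shown above plays the role of predict_classes. A brief sketch, using a made-up stand-in for the output of model.predict:

```python
import numpy as np

# Stand-in for model.predict(data) on a functional-API model
# with a single sigmoid output unit: shape (n, 1).
preds = np.array([[0.2], [0.8], [0.55]])

# Equivalent of Sequential.predict_classes for this binary case.
pred_labels = (preds > 0.5).astype("int32").ravel()
print(pred_labels)  # [0 1 1]
```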