Tags: python, machine-learning, deep-learning, keras, tensorflow
I am applying transfer learning with Keras on top of a pre-trained network. I have image patches with binary class labels and would like to use a CNN to predict a value in the range [0, 1] for unseen image patches.
Setup: batch size of 32, size of the fully connected layer: 16.
Result: after only a few epochs, training accuracy is already close to 1 and training loss close to 0, while on the validation data accuracy stays at 0.5 and the loss fluctuates from epoch to epoch. In the end, the CNN predicts only one class for all unseen patches.
The following strategies could reduce overfitting:
I tried batch sizes up to 512 and changed the size of the fully connected layer, without much success. Before randomly testing the remaining options, I would like to ask how to investigate what is going wrong, in order to find out which of the strategies above has the most potential.
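To start investigating, I plan to plot the learning curves from the History object that fit_generator returns; a minimal sketch, assuming matplotlib is available (the metric keys are 'acc'/'val_acc' in older Keras releases, 'accuracy'/'val_accuracy' in newer ones):

import matplotlib.pyplot as plt

def plot_history(history):
    # history is the keras.callbacks.History returned by fit_generator
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(history.history['acc'], label='train')          # 'accuracy' in newer Keras versions
    ax1.plot(history.history['val_acc'], label='validation')
    ax1.set_title('accuracy per epoch')
    ax1.legend()
    ax2.plot(history.history['loss'], label='train')
    ax2.plot(history.history['val_loss'], label='validation')
    ax2.set_title('loss per epoch')
    ax2.legend()
    plt.show()

A widening gap between the two curves would point at overfitting; a validation curve stuck at 0.5 from the first epoch would rather suggest a data or preprocessing problem.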
My code is below:
from datetime import datetime
from keras.preprocessing.image import ImageDataGenerator

def generate_data(imagePathTraining, imagesize, nBatches):
    datagen = ImageDataGenerator(rescale=1./255)
    generator = datagen.flow_from_directory(directory=imagePathTraining,        # path to the target directory
                                            target_size=(imagesize, imagesize), # dimensions to which all images found will be resized
                                            color_mode='rgb',                   # whether the images will be converted to have 1, 3, or 4 channels
                                            classes=None,                       # optional list of class subdirectories
                                            class_mode='categorical',           # type of label arrays that are returned
                                            batch_size=nBatches,                # size of the batches of data
                                            shuffle=True)                       # whether to shuffle the data
    return generator
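Since the labels are binary, one alternative I considered (not what I currently run) is class_mode='binary' in the generator together with a single sigmoid unit, which directly yields the [0, 1] predictions I am after; a sketch under that assumption, applied to the frozen base model defined below:

from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

def add_binary_head(basemodel):
    # replaces the softmax head: one sigmoid unit outputs a single probability in [0, 1]
    x = GlobalAveragePooling2D()(basemodel.output)
    x = Dense(16, activation='relu')(x)
    y = Dense(1, activation='sigmoid')(x)
    model = Model(inputs=basemodel.input, outputs=y)
    model.compile(optimizer='sgd',
                  loss='binary_crossentropy',  # pairs with class_mode='binary' in flow_from_directory
                  metrics=['accuracy'])
    return model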
from keras.applications.resnet50 import ResNet50
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model
from keras import optimizers

def create_model(imagesize, nBands, nClasses):
    print("%s: Creating the model..." % datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
    # Create pre-trained base model
    basemodel = ResNet50(include_top=False,      # exclude final pooling and fully connected layer in the original model
                         weights='imagenet',     # pre-training on ImageNet
                         input_tensor=None,      # optional tensor to use as image input for the model
                         input_shape=(imagesize, # shape tuple
                                      imagesize,
                                      nBands),
                         pooling=None,           # output of the model will be the 4D tensor output of the last convolutional layer
                         classes=nClasses)       # number of classes to classify images into
    print("%s: Base model created with %i layers and %i parameters." %
          (datetime.now().strftime('%Y-%m-%d_%H-%M-%S'),
           len(basemodel.layers),
           basemodel.count_params()))
    # Create new untrained layers
    x = basemodel.output
    x = GlobalAveragePooling2D()(x)              # global spatial average pooling layer
    x = Dense(16, activation='relu')(x)          # fully connected layer
    y = Dense(nClasses, activation='softmax')(x) # logistic layer making sure that probabilities sum up to 1
    # Create model combining pre-trained base model and new untrained layers
    model = Model(inputs=basemodel.input,
                  outputs=y)
    print("%s: New model created with %i layers and %i parameters." %
          (datetime.now().strftime('%Y-%m-%d_%H-%M-%S'),
           len(model.layers),
           model.count_params()))
    # Freeze weights on pre-trained layers
    for layer in basemodel.layers:
        layer.trainable = False
    # Define learning optimizer
    optimizerSGD = optimizers.SGD(lr=0.01,        # learning rate
                                  momentum=0.0,   # parameter that accelerates SGD in the relevant direction and dampens oscillations
                                  decay=0.0,      # learning rate decay over each update
                                  nesterov=False) # whether to apply Nesterov momentum
    # Compile model
    model.compile(optimizer=optimizerSGD,            # stochastic gradient descent optimizer
                  loss='categorical_crossentropy',   # objective function
                  metrics=['accuracy'],              # metrics to be evaluated by the model during training and testing
                  loss_weights=None,                 # scalar coefficients to weight the loss contributions of different model outputs
                  sample_weight_mode=None,           # sample-wise weights
                  weighted_metrics=None,             # metrics to be evaluated and weighted by sample_weight or class_weight during training and testing
                  target_tensors=None)               # tensors of the model's targets, which will be fed with the target data during training
    print("%s: Model compiled." % datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
    return model
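To double-check that the freeze actually took effect before training, I print the trainable flags; the arguments 224, 3, 2 are just example values:

model = create_model(224, 3, 2)  # example: 224x224 RGB patches, 2 classes
nFrozen = sum(1 for layer in model.layers if not layer.trainable)
print("%i of %i layers are frozen" % (nFrozen, len(model.layers)))
model.summary()                  # the 'Trainable params' count should only cover the new head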
def train_model(model, nBatches, nEpochs, imagePathTraining, imagesize, nSamples, valX, valY, resultPath):
    history = model.fit_generator(generator=generate_data(imagePathTraining, imagesize, nBatches),
                                  steps_per_epoch=nSamples//nBatches, # total number of steps (batches of samples)
                                  epochs=nEpochs,                # number of epochs to train the model
                                  verbose=2,                     # verbosity mode: 0 = silent, 1 = progress bar, 2 = one line per epoch
                                  callbacks=None,                # keras.callbacks.Callback instances to apply during training
                                  validation_data=(valX, valY),  # generator or tuple on which to evaluate the loss and any model metrics at the end of each epoch
                                  class_weight=None,             # optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function
                                  max_queue_size=10,             # maximum size for the generator queue
                                  workers=32,                    # maximum number of processes to spin up when using process-based threading
                                  use_multiprocessing=True,      # whether to use process-based threading
                                  shuffle=True,                  # whether to shuffle the order of the batches at the beginning of each epoch
                                  initial_epoch=0)               # epoch at which to start training
    print("%s: Model trained." % datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
    return history
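One thing I want to rule out is a preprocessing mismatch: the training generator rescales by 1/255, so valX has to be scaled the same way. A sketch of how I would prepare the tuple, assuming the validation images are loaded as uint8 arrays and valY holds integer class indices:

import numpy as np
from keras.utils import to_categorical

valX = valX.astype(np.float32) / 255.0      # same rescaling the training generator applies
valY = to_categorical(valY, num_classes=2)  # one-hot encoding, matching class_mode='categorical'; 2 classes as an example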
Following the suggestions above, I made these modifications:
Same ImageDataGenerator for training and validation:

from keras.applications.vgg19 import preprocess_input  # preprocessing matching the VGG19 base below

def generate_data(path, imagesize, nBatches):
    datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
    generator = datagen.flow_from_directory(directory=path,                     # path to the target directory
                                            target_size=(imagesize, imagesize), # dimensions to which all images found will be resized
                                            color_mode='rgb',                   # whether the images will be converted to have 1, 3, or 4 channels
                                            classes=None,                       # optional list of class subdirectories
                                            class_mode='categorical',           # type of label arrays that are returned
                                            batch_size=nBatches,                # size of the batches of data
                                            shuffle=True,                       # whether to shuffle the data
                                            seed=42)                            # random seed for shuffling and transformations
    return generator
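Both generators now come from the same function, so training and validation data pass through identical preprocessing; the paths and sizes below are placeholders:

trainGenerator = generate_data('path/to/training', imagesize=224, nBatches=32)  # hypothetical path, example sizes
valGenerator = generate_data('path/to/validation', imagesize=224, nBatches=32)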
from keras.applications.vgg19 import VGG19
from keras.layers import Dropout

def create_model(imagesize, nBands, nClasses):
    # Create pre-trained base model
    basemodel = VGG19(include_top=False,      # exclude final pooling and fully connected layer in the original model
                      weights='imagenet',     # pre-training on ImageNet
                      input_tensor=None,      # optional tensor to use as image input for the model
                      input_shape=(imagesize, # shape tuple
                                   imagesize,
                                   nBands),
                      pooling=None,           # output of the model will be the 4D tensor output of the last convolutional layer
                      classes=nClasses)       # number of classes to classify images into
    # Freeze weights on pre-trained layers
    for layer in basemodel.layers:
        layer.trainable = False
    # Create new untrained layers
    x = basemodel.output
    x = GlobalAveragePooling2D()(x)              # global spatial average pooling layer
    x = Dense(1024, activation='relu')(x)        # fully connected layer
    x = Dropout(rate=0.8)(x)                     # dropout layer
    y = Dense(nClasses, activation='softmax')(x) # logistic layer making sure that probabilities sum up to 1
    # Create model combining pre-trained base model and new untrained layers
    model = Model(inputs=basemodel.input,
                  outputs=y)
    # Define learning optimizer
    learningRate = 0.001                           # defined here so the decay term below can refer to it
    optimizerSGD = optimizers.SGD(lr=learningRate, # learning rate
                                  momentum=0.9,    # parameter that accelerates SGD in the relevant direction and dampens oscillations
                                  decay=learningRate/nEpochs,  # learning rate decay over each update; nEpochs is assumed to be defined in the enclosing scope
                                  nesterov=True)   # whether to apply Nesterov momentum
    # Compile model
    model.compile(optimizer=optimizerSGD,            # stochastic gradient descent optimizer
                  loss='categorical_crossentropy',   # objective function
                  metrics=['accuracy'],              # metrics to be evaluated by the model during training and testing
                  loss_weights=None,                 # scalar coefficients to weight the loss contributions of different model outputs
                  sample_weight_mode=None,           # sample-wise weights
                  weighted_metrics=None,             # metrics to be evaluated and weighted by sample_weight or class_weight during training and testing
                  target_tensors=None)               # tensors of the model's targets, which will be fed with the target data during training
    return model
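For reference, the decay argument of the legacy Keras SGD divides the learning rate per parameter update (per batch, not per epoch): lr_t = lr_0 / (1 + decay * t). A tiny helper makes the effect visible; the step count is an example value:

def effective_lr(lr0, decay, iteration):
    # legacy Keras SGD: lr is divided by (1 + decay * iterations)
    return lr0 / (1.0 + decay * iteration)

# e.g. 100 epochs at roughly 312 steps per epoch (example numbers)
print(effective_lr(0.001, 0.001/100, 100 * 312))  # ~0.00076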
def train_model(model, nBatches, nEpochs, trainGenerator, valGenerator, resultPath):
    history = model.fit_generator(generator=trainGenerator,
                                  steps_per_epoch=trainGenerator.samples // nBatches, # total number of steps (batches of samples)
                                  epochs=nEpochs,                # number of epochs to train the model
                                  verbose=2,                     # verbosity mode: 0 = silent, 1 = progress bar, 2 = one line per epoch
                                  callbacks=None,                # keras.callbacks.Callback instances to apply during training
                                  validation_data=valGenerator,  # generator or tuple on which to evaluate the loss and any model metrics at the end of each epoch
                                  validation_steps=valGenerator.samples // nBatches, # number of steps (batches of samples) to yield from validation_data generator before stopping at the end of every epoch
                                  class_weight=None,             # optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function
                                  max_queue_size=10,             # maximum size for the generator queue
                                  workers=1,                     # maximum number of processes to spin up when using process-based threading
                                  use_multiprocessing=False,     # whether to use process-based threading
                                  shuffle=True,                  # whether to shuffle the order of the batches at the beginning of each epoch
                                  initial_epoch=0)               # epoch at which to start training
    return history, model
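Instead of callbacks=None, I am considering early stopping and checkpointing to limit the remaining overfitting; a sketch with the monitored quantity and patience chosen arbitrarily (restore_best_weights needs Keras >= 2.2.3):

from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacksList = [EarlyStopping(monitor='val_loss',          # stop once validation loss stops improving
                               patience=10,                 # epochs to wait; 10 is an arbitrary example
                               restore_best_weights=True),
                 ModelCheckpoint(filepath='best_model.h5',  # hypothetical output path
                                 monitor='val_loss',
                                 save_best_only=True)]
# then pass callbacks=callbacksList to fit_generator above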
With these modifications, after training for 100 epochs with a batch size of 32, I obtained the following metrics:
train_acc: 0.831
train_loss: 0.436
val_acc: 0.692
val_loss: 0.568

I assume these settings are close to optimal because:

- train_acc exceeds val_acc only after about 30 epochs, and the gap between train_acc and val_acc stays small
- train_loss and val_loss decrease continuously

However, I would like to know:

- whether a higher val_acc could be achieved at the cost of more overfitting
- how to evaluate the model with sklearn.metrics classification_report() on the output of predict_generator() (a sketch of my current idea follows below)

Perhaps I should rather open a new question for these issues.
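For the classification_report() part, this is roughly what I have in mind: a validation generator built with shuffle=False so that predictions line up with valGenerator.classes (path and sizes are placeholders):

import numpy as np
from sklearn.metrics import classification_report
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg19 import preprocess_input

datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
valGenerator = datagen.flow_from_directory(directory='path/to/validation',  # hypothetical path
                                           target_size=(224, 224),
                                           class_mode='categorical',
                                           batch_size=32,
                                           shuffle=False)                   # keep directory order so predictions align with .classes
probs = model.predict_generator(valGenerator,
                                steps=int(np.ceil(valGenerator.samples / 32.0)))  # cover every sample exactly once
yPred = np.argmax(probs, axis=1)                                            # class with the highest softmax probability
print(classification_report(valGenerator.classes, yPred,
                            target_names=list(valGenerator.class_indices.keys())))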