Image classification with TensorFlow and Keras

Lak*_*n C 5 python deep-learning conv-neural-network keras tensorflow

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K


# dimensions of our images.
img_width, img_height = 150, 150


train_data_dir = 'flowers/train'
validation_data_dir = 'flowers/validation'
nb_train_samples = 2500
nb_validation_samples = 1000
epochs = 20
batch_size = 50


if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)


# Build a small CNN: three convolution + max-pooling blocks.
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


# Classifier head: flatten, a dense hidden layer with dropout, softmax over the 5 classes.
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(5))
model.add(Activation('softmax'))


model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])


# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')


validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')


# Train from the generators; each epoch runs nb_samples // batch_size steps.
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)


# Saves only the weights; the same architecture has to be rebuilt before loading them again.
model.save_weights('first_flowers_try.h5')

We trained this model to classify 5 classes of images. We used 500 images per class to train the model and 200 images per class to validate it, running Keras on the TensorFlow backend. The dataset it uses can be downloaded here: https://www.kaggle.com/alxmamaev/flowers-recognition

In our setup, we:

  • created a data/ folder
  • created train/ and validation/ subfolders inside data/
  • created daisy/, dandelion/, rose/, sunflower/ and tulip/ subfolders inside both train/ and validation/
  • put 500 images into each of data/train/daisy, dandelion, rose, sunflower and tulip
  • put 200 images into each of data/validation/daisy, dandelion, rose, sunflower and tulip. So we have 500 training examples and 200 validation examples per class (a scripted version of this split is sketched below).
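
For reference, that folder layout can be produced with a short script. This is only a minimal sketch: it assumes the Kaggle archive has been unpacked into a flowers_raw/ folder with one subfolder per class and .jpg files inside, and it writes into data/ as described in the list (the training script above points at flowers/ instead, so adjust the paths to whichever layout is actually used).

import shutil
from pathlib import Path

# Assumed locations: flowers_raw/ holds the unpacked Kaggle archive with one
# subfolder per class; data/ is the layout described in the list above.
raw_dir = Path('flowers_raw')
data_dir = Path('data')
classes = ['daisy', 'dandelion', 'rose', 'sunflower', 'tulip']
n_train, n_val = 500, 200

for cls in classes:
    images = sorted((raw_dir / cls).glob('*.jpg'))
    splits = {'train': images[:n_train],
              'validation': images[n_train:n_train + n_val]}
    for split, files in splits.items():
        target = data_dir / split / cls
        target.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, target / f.name)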

How do we use this trained model to predict/test and recognize another image?

len*_*nik 2

You have to call model.load_weights() with the file you saved them to. Then load the sample image you want to classify, preprocess it the same way as the training images, and call model.predict() on that batch of one image; the returned array of class probabilities is the prediction.
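
A minimal sketch of that workflow, assuming the model and train_generator objects from the script above are still in scope (or are rebuilt in exactly the same way), the TensorFlow backend's default channels_last layout, and a placeholder file name some_flower.jpg for the image to classify:

import numpy as np
from keras.preprocessing import image

# Load the trained weights back into the same architecture that produced them.
model.load_weights('first_flowers_try.h5')

# Preprocess the image the same way as the training data:
# resize to 150x150 and rescale pixel values to [0, 1].
img = image.load_img('some_flower.jpg', target_size=(150, 150))
x = image.img_to_array(img) / 255.0
x = np.expand_dims(x, axis=0)  # shape (1, 150, 150, 3): a batch of one

# predict() returns one row of 5 softmax probabilities per input image.
probs = model.predict(x)
class_idx = int(np.argmax(probs[0]))

# Map the index back to a class name via the training generator's mapping.
labels = {v: k for k, v in train_generator.class_indices.items()}
print(labels[class_idx], probs[0][class_idx])

Because save_weights() stores only the weights, the architecture has to be rebuilt (or the whole model saved with model.save() and reloaded with load_model()) before load_weights() can be called in a fresh session.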