from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.1,
    zoom_range=0.1,
    rotation_range=5.,
    width_shift_range=0.1,
    height_shift_range=0.1)

val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=20,
    shuffle=True,
    classes=TYPES,
    class_mode='categorical')

validation_generator = val_datagen.flow_from_directory(
    val_data_dir,
    target_size=(img_width, img_height),
    batch_size=20,
    shuffle=True,
    classes=TYPES,
    class_mode='categorical')

model.fit_generator(
    train_generator,
    samples_per_epoch=2000,
    nb_epoch=20)
Epoch 13/50
2021/2000 [==============================] - 171s - loss: 0.7973 - acc: 0.7041
Epoch 14/50
480/2000 [======>.......................] - ETA: 128s - loss: 0.8708
My ImageDataGenerators read 2261 training and 567 test images from their folders. I am trying to train my model with samples_per_epoch = 2000 and batch_size = 20. samples_per_epoch is evenly divisible by batch_size, yet somehow extra samples get drawn and this warning is shown:

UserWarning: Epoch comprised more than `samples_per_epoch` samples, which might affect learning results. Set `samples_per_epoch` correctly to avoid this warning.
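A likely reading of the Keras 1 epoch loop (an assumption on my part, not something stated in the question): flow_from_directory cycles through all 2261 images, so each pass ends with a leftover batch of a single image, since 2261 = 113 × 20 + 1. The generator is not reset between epochs, so once that 1-image batch has been drawn, the running sample count is no longer a multiple of 20 and an epoch can only stop past the 2000 mark. A rough simulation of that loop:

n_train, batch_size, samples_per_epoch = 2261, 20, 2000
# One full pass over the data: 113 batches of 20, then one batch of 1.
sizes = [batch_size] * (n_train // batch_size) + [n_train % batch_size]

i = 0
for epoch in range(3):
    seen = 0
    while seen < samples_per_epoch:   # simplified Keras 1 epoch loop
        seen += sizes[i % len(sizes)]
        i += 1
    print('epoch %d: %d/%d samples' % (epoch + 1, seen, samples_per_epoch))
# -> epoch 1 hits 2000 exactly; from epoch 2 onward the count overshoots,
#    which is what triggers the UserWarning above.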
It works fine on a single GPU, but if I try to train on multiple GPUs I get the following error:
InvalidArgumentError (see above for traceback): Incompatible shapes: [21] vs. [20]
[[Node: Equal = Equal[T=DT_INT64, _device="/job:localhost/replica:0/task:0/gpu:0"](ArgMax, ArgMax_1)]]
[[Node: gradients/concat_25_grad/Slice_1/_10811 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:1", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_101540_gradients/concat_25_grad/Slice_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:1"]]
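The shape mismatch is consistent with that same leftover batch (an interpretation, not something confirmed in the question): with data parallelism each batch is split across the GPUs, and an odd-sized batch makes the per-device label and prediction shapes disagree ([21] vs. [20]). A minimal workaround sketch, using a hypothetical wrapper rather than anything from the question, is to yield only full batches:

def full_batches_only(generator, batch_size):
    # flow_from_directory loops forever, so this generator does too;
    # it simply skips the trailing partial batch of each pass.
    for x, y in generator:
        if len(x) == batch_size:
            yield x, y

train_generator = full_batches_only(train_generator, batch_size=20)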
I am using this code for model parallelization:
Thanks for your help...
小智 0
The number of training samples must equal steps_per_epoch × batch_size. Reduce your training data by one image so that it comes to 2260, and use:

steps_per_epoch = 113
batch_size = 20
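Applied to the question's code, a sketch of these settings might look as follows (steps_per_epoch is the Keras 2 replacement for samples_per_epoch and counts batches, not samples):

batch_size = 20
steps_per_epoch = 2260 // batch_size   # 113 full batches, no remainder

model.fit_generator(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=20)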