Because the last batch of my training data is smaller than the rest, I am using a custom batch generator to work around a shape-incompatibility problem (a BroadcastGradientArgs error) that occurs with the standard model.fit() function. I used the batch generator mentioned here together with model.fit_generator():
import math
import numpy as np
from keras.utils import Sequence

class Generator(Sequence):
    # Class is a dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        # Number of batches served per epoch
        return math.floor(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        # Pick the index window for batch number idx
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]  # Line A
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y

    def on_epoch_end(self):
        # Reshuffle so batches differ between epochs
        np.random.shuffle(self.indices)
However, it seems to drop the last batch whenever its size is smaller than the given batch size. How can I update it so that the last batch is included and, for example, padded with some repeated samples?
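For the padding part, one option (a sketch of my own, not from the original post; it assumes __len__ uses math.ceil so that the short final batch is requested at all) is to top up a short batch by recycling indices from the start of the epoch:

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        if len(inds) < self.batch_size:
            # Short final batch: pad with repeated samples recycled from
            # the start of the (shuffled) index array.
            pad = self.indices[:self.batch_size - len(inds)]
            inds = np.concatenate([inds, pad])
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y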
Also, I somehow don't understand how "Line A" works!
Update: here is how I am using the generator with my model:
# dummy model
input_1 = Input(shape=(None,))
...
dense_1 = Dense(10, activation='relu')(input_1)
output_1 = Dense(1, activation='sigmoid')(dense_1)
model = Model(input_1, output_1)
print(model.summary())

# Compile and fit_generator
# metrics=['accuracy'] is needed so evaluate_generator also returns accuracy
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
train_data_gen = Generator(x1_train, y_train, batch_size)
test_data_gen = Generator(x1_test, y_test, batch_size)
model.fit_generator(generator=train_data_gen, validation_data=test_data_gen, epochs=epochs, shuffle=False, verbose=1)
loss, accuracy = model.evaluate_generator(generator=test_data_gen)
print('Test Loss: %0.5f Accuracy: %0.5f' % (loss, accuracy))
I think the culprit is this line:
return math.floor(self.x.shape[0] / self.batch_size)
Replacing it with this should work:
return math.ceil(self.x.shape[0] / self.batch_size)
Imagine you have 100 samples and a batch size of 32. That divides into 3.125 batches. With math.floor this becomes 3 batches, and the remaining 0.125 of a batch (the last 4 samples) is discarded; math.ceil rounds up to 4, so the short final batch is kept.
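To check the arithmetic directly (illustrative snippet with these made-up numbers):

    import math
    samples, batch_size = 100, 32
    print(samples / batch_size)              # 3.125
    print(math.floor(samples / batch_size))  # 3 -> the last 4 samples are never served
    print(math.ceil(samples / batch_size))   # 4 -> the 4th batch carries the remaining 4 samples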
As for Line A: if the batch size is 32, then when idx is 1 the slice [idx * self.batch_size:(idx + 1) * self.batch_size] becomes [32:64]; in other words, it selects the 33rd through 64th elements of self.indices.
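You can verify this in isolation (standalone demo, sizes made up):

    import numpy as np
    indices = np.arange(100)  # stand-in for self.indices
    batch_size, idx = 32, 1
    print(indices[idx * batch_size:(idx + 1) * batch_size])
    # -> [32 33 ... 63], i.e. the 33rd through 64th elements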
Update 2: changed the input to a None shape, switched to an LSTM, and added evaluation:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ""

import math
import numpy as np
from keras.models import Model
from keras.utils import Sequence
from keras.layers import Input, Dense, LSTM

class Generator(Sequence):
    # Class is a dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        # ceil instead of floor, so the short final batch is kept
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]  # Line A
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y

    def on_epoch_end(self):
        np.random.shuffle(self.indices)

# dummy model
input_1 = Input(shape=(None, 10))
x = LSTM(90)(input_1)
x = Dense(10)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(input_1, x)
print(model.summary())

# Compile and fit_generator
model.compile(optimizer='adam', loss='binary_crossentropy')
x1_train = np.random.rand(1590, 20, 10)
x1_test = np.random.rand(90, 20, 10)
y_train = np.random.rand(1590, 1)
y_test = np.random.rand(90, 1)
train_data_gen = Generator(x1_train, y_train, 256)
test_data_gen = Generator(x1_test, y_test, 256)

model.fit_generator(generator=train_data_gen,
                    validation_data=test_data_gen,
                    epochs=5,
                    shuffle=False,
                    verbose=1)

loss = model.evaluate_generator(generator=test_data_gen)
print('Test Loss: %0.5f' % loss)
This runs without any problems.
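A quick sanity check (my addition, reusing train_data_gen from the script above) confirms that the short final batch is now served: 1590 = 6 * 256 + 54, so batch 6 holds 54 samples:

    for i in range(len(train_data_gen)):
        batch_x, batch_y = train_data_gen[i]
        print(i, batch_x.shape, batch_y.shape)
    # last line printed: 6 (54, 20, 10) (54, 1)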