I am working with a dataset that is too large to fit in memory, so I was pointed to Dask dataframes. From the documentation I understand that Dask does not load the whole dataset into memory; instead, it spins up multiple threads that fetch records from disk as they are needed. I therefore assumed that when training a Keras model with a batch size of 500, only 500 records would be in memory at a time. But when I start training, it takes forever. I am probably doing something wrong. Please advise.
Shape of the training data: 1000000 * 1290
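My mental model of Dask's lazy evaluation, as a minimal sketch (the file pattern and the mean() call are just for illustration):

import dask.dataframe as dd

# Lazy: only a small sample is read to infer dtypes;
# the full data stays on disk and a task graph is built
df = dd.read_csv('x_train_d_final*.csv')

# Still lazy: no records have been pulled from disk yet
col_means = df.mean()

# Only .compute() actually streams the chunks from disk
print(col_means.compute())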
import glob
import dask.dataframe as dd
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

batch_size = 500
num_classes = 2
epochs = 5

paths_train = glob.glob(r'x_train_d_final*.csv')
X_train_d = dd.read_csv('.../x_train_d_final0.csv')
# Y_train is loaded separately (not shown); one-hot encode the label column
Y_train1 = keras.utils.to_categorical(Y_train.iloc[:, 1], num_classes)

model = Sequential()
model.add(Dense(645, activation='sigmoid', input_shape=(1290,),
                kernel_initializer='glorot_normal'))
#model.add(Dense(20, activation='sigmoid', kernel_initializer='glorot_normal'))
model.add(Dense(num_classes, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=Adam(decay=0),
              metrics=['accuracy'])

history = model.fit(X_train_d.to_records(), Y_train1,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    class_weight={0: 1, 1: 6.5},
                    shuffle=False)
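For reference, this is the kind of batch-wise pipeline I assumed was happening under the hood. It is a minimal sketch using pandas' chunked reader instead of Dask, and it assumes (hypothetically) that the label sits in the last CSV column:

import keras
import pandas as pd

def batch_generator(path, batch_size, num_classes):
    # Hypothetical helper: reads one chunk at a time, so only
    # batch_size rows are ever in memory
    while True:  # Keras requires the generator to loop across epochs
        for chunk in pd.read_csv(path, chunksize=batch_size):
            X = chunk.iloc[:, :-1].values
            y = keras.utils.to_categorical(chunk.iloc[:, -1], num_classes)
            yield X, y

# Illustrative call: 1000000 rows / batch size 500 = 2000 steps per epoch
# model.fit_generator(batch_generator('x_train_d_final0.csv', 500, 2),
#                     steps_per_epoch=2000, epochs=5)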