The kernel dies every time I train my model

python keras tensorflow

Here is the code:

# import libraries
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense

# import dataset
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator()

test_datagen = ImageDataGenerator()

training_set = train_datagen.flow_from_directory(
                                            'data/spectrogramme/ensemble_de_formation',
                                            target_size = (64, 64),
                                            batch_size = 128,
                                            class_mode = 'binary')

test_set = test_datagen.flow_from_directory('data/spectrogramme/ensemble_de_test',
                                            target_size = (64, 64),
                                            batch_size = 128,
                                            class_mode = 'binary')

# initializing
reseau = Sequential()

# 1. convolution
reseau.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
reseau.add(MaxPooling2D(pool_size = (2, 2)))
reseau.add(Conv2D(32, (3, 3), activation = 'relu'))
reseau.add(MaxPooling2D(pool_size = (2, 2)))
reseau.add(Conv2D(64, (3, 3), activation = 'relu'))
reseau.add(MaxPooling2D(pool_size = (2, 2)))
reseau.add(Conv2D(64, (3, 3), activation = 'relu'))
reseau.add(MaxPooling2D(pool_size = (2, 2)))

# 2. flattening
reseau.add(Flatten())

# 3. fully connected
from keras.layers import Dropout
reseau.add(Dense(units = 64, activation = 'relu'))
reseau.add(Dropout(0.1))
reseau.add(Dense(units = 128, activation = 'relu'))
reseau.add(Dropout(0.05))
reseau.add(Dense(units = 256, activation = 'relu'))
reseau.add(Dropout(0.03))
reseau.add(Dense(units = 1, activation = 'sigmoid'))

# 4. compile
reseau.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

# 5. fit
reseau.fit_generator(training_set, steps_per_epoch = 8000, epochs = 1,
                     validation_data = test_set, validation_steps = 2000)

This should prove that I have tensorflow-gpu installed with CUDA and cuDNN: (screenshot)
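Since the screenshot is not reproduced here, a quick sanity check along these lines (a sketch assuming the TF 1.x-era install implied by the fit_generator call above) can confirm whether TensorFlow actually detects the GPU:

# sanity check: does TensorFlow see a CUDA-capable GPU? (TF 1.x-style API)
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())                           # True if a usable GPU is found
print([d.name for d in device_lib.list_local_devices()])    # should include '/device:GPU:0'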

I don't know what to do; I have already reinstalled CUDA and cuDNN several times.

However, if I uninstall tensorflow-gpu, the program runs perfectly... except that each epoch takes about 5000 seconds, which I would like to avoid.

For what it's worth, all of this is happening on Windows.

Any help is appreciated.

Answer from 小智:

If you are using Jupyter, check whether any other notebooks are still running; I have found that they hold on to GPU memory even when they are not being actively used.

Shut down all unused running notebooks in Jupyter.
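You can see which processes are currently holding GPU memory with nvidia-smi. Beyond closing other notebooks, one common workaround (a sketch assuming the TF 1.x / standalone Keras combination implied by the fit_generator call in the question) is to let TensorFlow allocate GPU memory incrementally instead of grabbing it all up front, which makes the kernel less likely to die from an out-of-memory allocation failure. Run this before building the model:

# let TensorFlow grow GPU memory usage on demand (TF 1.x + standalone Keras)
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True   # allocate GPU memory only as needed
set_session(tf.Session(config=config))   # make Keras use this session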