I created an ANN with two ReLU hidden layers and a linear output layer and am trying to approximate the simple ln(x) function. I cannot get it to work, which puzzles me, because ln(x) on the range x: [0.0, 1.0] should be possible to approximate without problems (I am using a learning rate of 0.01 and basic gradient descent optimization).
import tensorflow as tf
import numpy as np

def GetTargetResult(x):
    curY = np.log(x)
    return curY

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Parameters
learning_rate = 0.01
training_epochs = 10000
batch_size = 50
display_step = 500 …
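The snippet is cut off before the weights, loss, and training loop are defined. For context, here is a minimal sketch of how the rest of such a setup might look, assuming a 1-input/1-output network, MSE loss, and plain GradientDescentOptimizer; the layer sizes and variable names below are my own guesses, not the original code.

# Hypothetical completion of the setup above (TF1-style graph code, not the original post's code).
n_input, n_hidden_1, n_hidden_2, n_output = 1, 50, 50, 1

x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_output]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_output]))
}

pred = multilayer_perceptron(x, weights, biases)
cost = tf.reduce_mean(tf.square(pred - y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epochs):
        # Sample x away from 0 so that log(x) stays finite.
        batch_x = np.random.uniform(0.01, 1.0, size=(batch_size, 1))
        batch_y = GetTargetResult(batch_x)
        _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
        if epoch % display_step == 0:
            print('epoch {}, cost {:.6f}'.format(epoch, c))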
I am currently building a convolutional neural network to distinguish clear ECG images from ECG images with noise.
With noise:
Without noise:
My problem
So I built a convnet using Keras on top of TensorFlow and trained it several times. It consistently reaches 99% training accuracy, 99% validation accuracy and 98% test accuracy. But when I predict a single image, it always gives me [0].
Most of the time my model stops early at epoch 3 or 4 with 99% accuracy on both training and validation; almost always it already reaches 98% or 99% accuracy in the first or second epoch.
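For concreteness, this is roughly what such a single-image prediction can look like when the image is rescaled the same way as the generators defined under "My model" below. It is only a sketch: model, image_path, the 128x128 target size, and the single-sigmoid-output assumption are mine, not taken from the original post.

# Hypothetical single-image prediction; image_path and the target size are placeholders.
import numpy as np
from keras.preprocessing import image

img = image.load_img(image_path, target_size=(128, 128))
arr = image.img_to_array(img) / 255.0   # same 1./255 rescaling as the generators
arr = np.expand_dims(arr, axis=0)        # shape (1, height, width, channels)

prob = model.predict(arr)                # assumed sigmoid output in [0, 1]
print(prob, (prob > 0.5).astype(int))    # inspect the raw probability, then threshold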
My model
from keras.models import Sequential
from keras.datasets import mnist
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation,Dropout,Flatten,Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard
from keras.layers import ZeroPadding2D
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
tensorboard = TensorBoard(log_dir="./logs",histogram_freq=0,write_graph=True,write_images=True)
earlystop = EarlyStopping(monitor='val_loss',patience=2,verbose=1)
# Variables
batchSize = 15
num_of_samples = 15000
num_of_testing_samples = 3750
num_of_val_samples = 2000
training_imGenProp = ImageDataGenerator(rescale=1./255,
                                        width_shift_range=0.02,
                                        height_shift_range=0.02,
                                        horizontal_flip=False,
                                        fill_mode='nearest'
                                        )
testing_imGenProp = …

I am currently trying to get a decent score (> 40% accuracy) on CIFAR-100 with Keras. However, I am seeing strange behavior from a CNN model: it tends to predict some classes (2 - 5 of them) far more often than the others:
The pixel at position (i, j) counts how many elements of the validation set from class i were predicted as class j. The diagonal therefore contains the correct classifications, and everything else is an error. The two vertical bars show that the model predicts those classes very often, even though the true labels are mostly not those classes.
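For reference, a minimal sketch of how such a confusion matrix can be computed with scikit-learn, assuming a trained Keras model `model`, validation images `X_val`, and one-hot labels `Y_val`; these names are placeholders, not from the code below.

# Hypothetical confusion-matrix check; model, X_val, Y_val are stand-ins.
from sklearn.metrics import confusion_matrix
import numpy as np

y_true = np.argmax(Y_val, axis=1)                 # one-hot -> class indices
y_pred = np.argmax(model.predict(X_val), axis=1)
cm = confusion_matrix(y_true, y_pred)             # cm[i, j] = true class i predicted as j

# Column sums minus the diagonal: how often each class is predicted
# for samples that do not belong to it (the "vertical bars").
print(cm.sum(axis=0) - np.diag(cm))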
CIFAR-100 is perfectly balanced: all 100 classes have 500 training samples each.
Why does the model tend to predict some classes much more often than others? How can this be fixed?
Running this takes a while.
#!/usr/bin/env python
from __future__ import print_function
from keras.datasets import cifar100
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
import numpy as np
batch_size = 32
nb_classes = 100
nb_epoch = 50
data_augmentation = True
# input image dimensions
img_rows, img_cols = 32, 32
# The CIFAR10 images are RGB. …
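The code is cut off at this point. For context, a minimal sketch of how such a CIFAR-100 pipeline might continue (normalization, one-hot labels, a small convnet, and training), reusing the imports and variables already defined above; this is an assumed, generic Keras 1-style setup, not the poster's actual model.

# Hypothetical continuation of the truncated script above.
(X_train, y_train), (X_test, y_test) = cifar100.load_data()
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train,
                                                  test_size=0.1, random_state=42)

X_train = X_train.astype('float32') / 255.0
X_val = X_val.astype('float32') / 255.0
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_val = np_utils.to_categorical(y_val, nb_classes)

model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          validation_data=(X_val, Y_val), shuffle=True)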