I am new to caffe, and I am trying to normalize the convolution output between 0 and 1 with min-max normalization:

Out = (X - Xmin) / (Xmax - Xmin)

I have checked many layers (Power, Scale, Batch Normalization, MVN), but none of them gives me a min-max normalized output inside the network. Can anyone help?
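In numpy terms, this is the operation I want applied to the whole blob (a minimal sketch of the target behavior):

import numpy as np

x = np.random.randn(1, 1, 512, 512).astype(np.float32)
# min-max normalization: shift to zero, then scale by the value range
out = (x - x.min()) / (x.max() - x.min())
print(out.min(), out.max())  # exactly 0.0 and 1.0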
************* My prototxt file *****************
name: "normalizationCheck"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 1 dim: 1 dim: 512 dim: 512 } }
}
layer {
name: "normalize1"
type: "Power"
bottom: "data"
top: "normalize1"
power_param {
shift: 0
scale: 0.00392156862
power: 1
}
}
layer {
bottom: "normalize1"
top: "Output"
name: "conv1"
type: "Convolution"
convolution_param {
num_output: 1
kernel_size: 1
pad: 0
stride: 1
bias_term: false
weight_filler {
type: "constant"
value: 1
}
}
}
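This is how I check the output from pycaffe; a minimal sketch, assuming the prototxt above is saved as normalizationCheck.prototxt (an example filename):

import caffe
import numpy as np

net = caffe.Net('normalizationCheck.prototxt', caffe.TEST)
# arbitrary input data, not confined to [0, 255]
net.blobs['data'].data[...] = 100 * np.random.randn(1, 1, 512, 512)
out = net.forward()['Output']
print(out.min(), out.max())  # scaled by 1/255, but not mapped into [0, 1]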
The convolution layer output is not in normalized form. I want the min-max normalized output as a layer; I can do it manually in code, but I need it inside the network. Thanks.
You can write your own C++ layer following these guidelines; that page shows how to implement a "forward-only" layer.

Alternatively, you can implement the layer in python and have caffe execute it via a "Python" layer:

First, implement your layer in python and store it in '/path/to/my_min_max_layer.py':
import caffe
import numpy as np

class min_max_forward_layer(caffe.Layer):
    def setup(self, bottom, top):
        # make sure only one input and one output
        assert len(bottom) == 1 and len(top) == 1, "min_max_layer expects a single input and a single output"

    def reshape(self, bottom, top):
        # reshape output to be identical to input
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        # YOUR IMPLEMENTATION HERE!!
        in_ = np.array(bottom[0].data)
        x_min = in_.min()
        x_max = in_.max()
        top[0].data[...] = (in_ - x_min) / (x_max - x_min)

    def backward(self, top, propagate_down, bottom):
        # backward pass is not implemented!
        pass
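If the input blob can be constant, x_max - x_min is zero and the division above produces NaNs; a variant of forward with a simple guard (a sketch, optional):

    def forward(self, bottom, top):
        in_ = np.array(bottom[0].data)
        x_min = in_.min()
        x_max = in_.max()
        # guard against a constant input, where x_max == x_min
        scale = (x_max - x_min) if x_max > x_min else 1.0
        top[0].data[...] = (in_ - x_min) / scale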
Once you have implemented the layer in python, you can simply add it to your net (make sure '/path/to' is in your $PYTHONPATH):
layer {
  name: "my_min_max_forward_layer"
  type: "Python"
  bottom: "name_your_input_here"
  top: "name_your_output_here"
  python_param {
    module: "my_min_max_layer"     # name of python file to be imported
    layer: "min_max_forward_layer" # name of layer class
  }
}
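Note that "Python" layers are only available if caffe was built with WITH_PYTHON_LAYER := 1 set in Makefile.config. With that in place you can verify the layer stretches its input to exactly [0, 1]; a minimal sketch, assuming the layer is appended after conv1 in the question's net with bottom: "Output" and top: "normalized", and the combined prototxt is saved as normalization_min_max.prototxt (an example filename):

import caffe
import numpy as np

net = caffe.Net('normalization_min_max.prototxt', caffe.TEST)
net.blobs['data'].data[...] = 100 * np.random.randn(1, 1, 512, 512)
out = net.forward()['normalized']
print(out.min(), out.max())  # exactly 0.0 and 1.0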