I have this net 'RGB2GRAY.prototxt':
name: "RGB2GRAY"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 512 dim: 512 } }
}
layer {
  name: "conv1"
  bottom: "data"
  top: "conv1"
  type: "Convolution"
  convolution_param {
    num_output: 1
    kernel_size: 1
    pad: 0
    stride: 1
    bias_term: false
    weight_filler {
      type: "constant"
      value: 1
    }
  }
}
I am trying to build my own network that converts RGB to gray using this formula:
x = 0.299r + 0.587g + 0.114b.
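For example, a pixel with (r, g, b) = (200, 100, 50) maps to x = 0.299·200 + 0.587·100 + 0.114·50 ≈ 124.2, which becomes the single gray value for that pixel.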
Basically, I can do the conversion with a 1x1 convolution using custom weights (0.299, 0.587, 0.114), but I don't understand how to modify the convolution layer. I can set the weights and the bias term, but I am not able to change the filter values. I tried the following, but the convolution filter does not get updated:
shared_ptr<Net<float> > net_;
net_.reset(new Net<float>("path of model file", TEST));
const shared_ptr<Blob<float> >& conv_blob = net_->blob_by_name("conv1");
float* conv_weight = conv_blob->mutable_cpu_data();
conv_weight[0] = 0.299;
conv_weight[1] = 0.587;
conv_weight[2] = 0.114;
net_->Forward();
//for dumping the output
const shared_ptr<Blob<float> >& probs = net_->blob_by_name("conv1");
const float* probs_out = probs->cpu_data();
cv::Mat matout(height, width, CV_32F);
for (size_t i = 0; i < height; i++)
{
    for (size_t j = 0; j < width; j++)
    {
        matout.at<float>(i, j) = probs_out[i * width + j];
    }
}
matout.convertTo(matout, CV_8UC1);
cv::imwrite("gray.bmp", matout);
In Python I found it easier to customize the convolution filter, but I need a solution in C++.
Just a small change to your C++ code:
// access the convolution layer by its name
const shared_ptr<Layer<float> >& conv_layer = net_->layer_by_name("conv1");
// access the layer's blob that stores weights
shared_ptr<Blob<float> >& weight = conv_layer->blobs()[0];
float* conv_weight = weight->mutable_cpu_data();
conv_weight[0] = 0.299;
conv_weight[1] = 0.587;
conv_weight[2] = 0.114;
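The original code wrote the weight values into the wrong blob. A quick way to see the difference (a sketch, assuming net_ is loaded as above and <iostream> is included) is to print the shapes of the two blobs associated with "conv1":

// Output blob registered under the name "conv1": 1 x 1 x 512 x 512 (the gray image).
std::cout << "conv1 output blob : "
          << net_->blob_by_name("conv1")->shape_string() << std::endl;
// Weight blob owned by the conv1 layer: 1 x 3 x 1 x 1 (one coefficient per input channel).
std::cout << "conv1 weight blob : "
          << net_->layer_by_name("conv1")->blobs()[0]->shape_string() << std::endl;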
In fact, the blob named "conv1" in your code refers to the output of the conv1 convolution layer, not the blob that holds its weights: Net<Dtype>::blob_by_name(const string& blob_name) returns the blobs that store the intermediate results passed between the layers of the net.
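Putting it together, below is a minimal end-to-end sketch, not a drop-in solution: it assumes the prototxt above is saved as "RGB2GRAY.prototxt", "input.png" is a placeholder image name, CPU mode is used, and that the image comes from OpenCV's imread (which returns BGR, so the planes are copied into the input blob in R, G, B order to match the weight order); error handling is omitted.

#include <cstring>
#include <vector>
#include <opencv2/opencv.hpp>
#include <caffe/caffe.hpp>

using namespace caffe;

int main() {
    Caffe::set_mode(Caffe::CPU);
    shared_ptr<Net<float> > net_(new Net<float>("RGB2GRAY.prototxt", TEST));

    // 1x1 kernel weights, blob shape 1 x 3 x 1 x 1: one coefficient per input channel.
    float* w = net_->layer_by_name("conv1")->blobs()[0]->mutable_cpu_data();
    w[0] = 0.299f;  // channel 0, filled with R below
    w[1] = 0.587f;  // channel 1, filled with G
    w[2] = 0.114f;  // channel 2, filled with B

    // Copy the image into the "data" blob (1 x 3 x 512 x 512) in R, G, B plane order.
    cv::Mat bgr = cv::imread("input.png");      // OpenCV loads BGR
    Blob<float>* input = net_->input_blobs()[0];
    const int h = input->height(), wd = input->width(), plane = h * wd;
    cv::resize(bgr, bgr, cv::Size(wd, h));
    cv::Mat f;
    bgr.convertTo(f, CV_32FC3);
    std::vector<cv::Mat> ch(3);
    cv::split(f, ch);                           // ch[0]=B, ch[1]=G, ch[2]=R
    float* d = input->mutable_cpu_data();
    std::memcpy(d + 0 * plane, ch[2].ptr<float>(), plane * sizeof(float));  // R
    std::memcpy(d + 1 * plane, ch[1].ptr<float>(), plane * sizeof(float));  // G
    std::memcpy(d + 2 * plane, ch[0].ptr<float>(), plane * sizeof(float));  // B

    net_->Forward();

    // The "conv1" output blob (1 x 1 x 512 x 512) now holds the gray image.
    const float* out = net_->blob_by_name("conv1")->cpu_data();
    cv::Mat gray(h, wd, CV_32F, const_cast<float*>(out));
    gray.convertTo(gray, CV_8UC1);
    cv::imwrite("gray.bmp", gray);
    return 0;
}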