What is the simplest way to make an object detector in C++ with Fast/Faster R-CNN and Caffe?
As is known, we can use the following R-CNN (Region-based Convolutional Neural Networks) pipelines with Caffe:
RCNN: https://github.com/BVLC/caffe/blob/be163be0ea5befada208dbf0db29e6fa5811dc86/python/caffe/detector.py#L174

Fast R-CNN: https://github.com/rbgirshick/fast-rcnn/blob/master/tools/demo.py#L89
scores, boxes = im_detect(net, im, obj_proposals), which calls def im_detect(net, im, boxes):
It uses rbgirshick/caffe-fast-rcnn, ROIPooling layers and the bbox_pred output.

Faster R-CNN:
scores, boxes = im_detect(net, im), which calls def im_detect(net, im, boxes=None):
It uses rbgirshick/caffe-fast-rcnn, ROIPooling layers and the bbox_pred output.
All of this uses Python and Caffe, but how can it be done in C++ with Caffe?
There is a C++ example only for classification (saying what is in the image), but not for detection (saying what is in the image and where): https://github.com/BVLC/caffe/tree/master/examples/cpp_classification
Is it enough to simply clone the rbgirshick/py-faster-rcnn repository together with rbgirshick/caffe-fast-rcnn, download the pre-trained models with ./data/scripts/fetch_faster_rcnn_models.sh, use coco/VGG16/faster_rcnn_end2end/test.prototxt, and make a few small changes to the CaffeNet C++ classification example?
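Below is a rough sketch of how I imagine the modified classification example would start. The paths, the input blob names (data and im_info) and the preprocessing constants are assumptions taken from py-faster-rcnn, so treat it as a sketch rather than working code; also, the stock test.prototxt contains a Python proposal layer, so Caffe presumably has to be built with WITH_PYTHON_LAYER := 1 (or that layer replaced by a C++ equivalent) for it to run:

#include <caffe/caffe.hpp>
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <string>
#include <vector>

using caffe::Blob;
using caffe::Caffe;
using caffe::Net;

int main() {
  // Assumed paths: prototxt from py-faster-rcnn; adjust the weights file
  // to whatever fetch_faster_rcnn_models.sh actually downloaded and to a
  // model that matches the prototxt.
  const std::string model_file   = "models/coco/VGG16/faster_rcnn_end2end/test.prototxt";
  const std::string weights_file = "data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel";

  Caffe::set_mode(Caffe::CPU);  // or Caffe::GPU
  Net<float> net(model_file, caffe::TEST);
  net.CopyTrainedLayersFrom(weights_file);

  // Preprocess as py-faster-rcnn does: BGR float, mean subtraction,
  // resize so the short side is about 600 px (the 1000 px cap on the
  // long side is omitted here for brevity).
  cv::Mat img = cv::imread("image.jpg");
  cv::Mat img_float;
  img.convertTo(img_float, CV_32FC3);
  cv::subtract(img_float, cv::Scalar(102.9801, 115.9465, 122.7717), img_float);
  const float scale = 600.0f / std::min(img.rows, img.cols);
  cv::resize(img_float, img_float, cv::Size(), scale, scale);

  // Fill the two input blobs the test.prototxt declares: "data" and "im_info".
  Blob<float>* data = net.input_blobs()[0];
  data->Reshape(1, 3, img_float.rows, img_float.cols);
  float* data_ptr = data->mutable_cpu_data();
  for (int c = 0; c < 3; ++c)
    for (int h = 0; h < img_float.rows; ++h)
      for (int w = 0; w < img_float.cols; ++w)
        data_ptr[(c * img_float.rows + h) * img_float.cols + w] =
            img_float.at<cv::Vec3f>(h, w)[c];

  Blob<float>* im_info = net.input_blobs()[1];
  im_info->Reshape(1, 3, 1, 1);
  im_info->mutable_cpu_data()[0] = img_float.rows;  // scaled image height
  im_info->mutable_cpu_data()[1] = img_float.cols;  // scaled image width
  im_info->mutable_cpu_data()[2] = scale;           // resize factor

  net.Reshape();
  net.ForwardPrefilled();
  // ... then read bbox_pred and cls_score, which is exactly my question below ...
  return 0;
}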
And how do I get the output data from the two layers bbox_pred and cls_score?
Should I put both (bbox_pred and cls_score) into one array:
const vector<Blob<float>*>& output_blobs = net_->ForwardPrefilled();
Blob<float>* output_layer = output_blobs[0];
const float* begin = output_layer->cpu_data();
const float* end = begin + output_layer->channels();
std::vector<float> bbox_and_score_array(begin, end);
Or into two arrays?
const vector<Blob<float>*>& output_blobs = net_->ForwardPrefilled();
Blob<float>* bbox_output_layer = output_blobs[0];
const float* begin_b = bbox_output_layer->cpu_data();
const float* end_b = begin_b + bbox_output_layer->channels();
std::vector<float> bbox_array(begin_b, end_b);

Blob<float>* score_output_layer = output_blobs[1];
const float* begin_c = score_output_layer->cpu_data();
const float* end_c = begin_c + score_output_layer->channels();
std::vector<float> score_array(begin_c, end_c);
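To make the second variant concrete, here is the kind of helper I have in mind. It reads the two blobs by name rather than by position in output_blobs, and copies num() * channels() values per blob, because in Faster R-CNN these blobs hold one row per region proposal; the blob names and shapes are my assumption based on the py-faster-rcnn test.prototxt. Is something like this the right direction?

#include <caffe/caffe.hpp>
#include <vector>

// Sketch: read the two detection outputs by name after a forward pass.
// In py-faster-rcnn, bbox_pred is (num_proposals, 4 * num_classes) and
// cls_score is (num_proposals, num_classes).
void GetDetectionOutputs(caffe::Net<float>& net,
                         std::vector<float>* bbox_array,
                         std::vector<float>* score_array) {
  net.ForwardPrefilled();

  const boost::shared_ptr<caffe::Blob<float> > bbox_blob  = net.blob_by_name("bbox_pred");
  const boost::shared_ptr<caffe::Blob<float> > score_blob = net.blob_by_name("cls_score");

  // channels() alone covers only a single proposal;
  // num() * channels() covers every proposal.
  const float* bbox_begin = bbox_blob->cpu_data();
  bbox_array->assign(bbox_begin,
                     bbox_begin + bbox_blob->num() * bbox_blob->channels());

  const float* score_begin = score_blob->cpu_data();
  score_array->assign(score_begin,
                      score_begin + score_blob->num() * score_blob->channels());

  // These are still raw network outputs: bbox_pred holds per-class regression
  // deltas that have to be applied to the proposal boxes (what
  // bbox_transform_inv does in the Python im_detect) and then filtered
  // with NMS to obtain final detections.
}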