Abi*_*n K 370 python ocr opencv numpy computer-vision
I am trying to implement a "Digit Recognition OCR" in OpenCV-Python (cv2). It is just for learning purposes. I would like to learn both the KNearest and SVM features in OpenCV.
I have 100 samples (i.e. images) of each digit, and I would like to train with them.
There is a sample letter_recog.py that comes with the OpenCV samples, but I still couldn't figure out how to use it. I don't understand what the samples, responses etc. are. Also, it loads a txt file first, which I didn't understand at first.
Later, on searching a little, I could find a letter_recognition.data in the cpp samples. I used it and made a code for cv2.KNearest in the model of letter_recog.py (just for testing):
import numpy as np
import cv2
fn = 'letter-recognition.data'
# first column is the letter label (converted 'A'..'Z' -> 0..25), the remaining 16 columns are features
a = np.loadtxt(fn, np.float32, delimiter=',', converters={ 0 : lambda ch : ord(ch)-ord('A') })
samples, responses = a[:,1:], a[:,0]
model = cv2.KNearest()
retval = model.train(samples,responses)
retval, results, neigh_resp, dists = model.find_nearest(samples, k = 10)
print results.ravel()
It gives me an array of size 20000, and I don't understand what it is.
Questions:
1) What is the letter_recognition.data file? How can I build such a file from my own data set?
2) What does results.ravel() denote?
3) How can we write a simple digit recognition tool using the letter_recognition.data file (with either KNearest or SVM)?
Abi*_*n K 507
Well, I decided to work on my own question to solve the above problem. What I wanted was to implement a simplified OCR using the KNearest or SVM features in OpenCV. Below is what I did and how. (It is just for learning how to use KNearest for simple OCR purposes.)
1) My first question was about the letter_recognition.data file that comes with the OpenCV samples. I wanted to know what is inside that file.
It contains a letter, along with 16 features of that letter.
And this SOF helped me to find it. These 16 features are explained in the paper Letter Recognition Using Holland-Style Adaptive Classifiers. (Although I still didn't understand some of the features at the end.)
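So the file is just plain comma-separated text: one row per sample, with the label in the first column and the 16 feature values after it. A minimal sketch of how you could build a file with the same layout from your own data set (the file name and feature values here are made up, purely for illustration):
import numpy as np

# hypothetical data: 3 samples, each with a numeric class label and 16 features
labels = np.array([0, 1, 2], dtype=np.float32)
features = np.random.rand(3, 16).astype(np.float32)

# label first, features after, comma separated - the layout np.loadtxt(..., delimiter=',') expects
rows = np.hstack([labels.reshape(-1, 1), features])
np.savetxt('my_recognition.data', rows, delimiter=',', fmt='%g')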
2) Since I knew that, without understanding all those features, it would be difficult to follow that method, I tried some other papers, but all of them were a little difficult for a beginner.
So I just decided to take all the pixel values as my features. (I was not worried about accuracy or performance; I just wanted it to work, at least with the lowest possible accuracy.)
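Concretely, each detected digit is resized to 10x10 and those 100 pixel intensities form the feature vector. A tiny illustration of that idea (the ROI here is just a dummy array; in the real code below it is cropped from the thresholded image):
import cv2
import numpy as np

roi = np.zeros((34, 22), np.uint8)       # placeholder for a cropped digit region
roismall = cv2.resize(roi, (10, 10))     # normalise every digit to the same size
feature = roismall.reshape((1, 100))     # 100 pixel values form one feature row
print(feature.shape)                     # (1, 100)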
I used the image below as my training data:

(I know the amount of training data is small. But, since all the letters are of the same font and size, I decided to try this.)
To prepare the data for training, I wrote a small code in OpenCV. It does the following: it loads the image, selects the digits (by contour finding and applying constraints on the area and height of the letters to avoid false detections), draws the bounding rectangle around one letter, and waits for a key press manually. This time we ourselves press the digit key corresponding to the letter in the box. Once the digit key is pressed, the 100 pixel values of that letter are saved in one array and the manually entered digit in another array. At the end of the manual classification, all the digits in the train data (train.png) have been labelled by ourselves, and the image looks like below:

Below is the code I used for the above purpose (of course, not so clean):
import sys
import numpy as np
import cv2
im = cv2.imread('pitrain.png')
im3 = im.copy()
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)
################# Now finding Contours ###################
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
samples = np.empty((0,100))
responses = []
keys = [i for i in range(48,58)]   # ASCII codes of the digit keys '0'..'9'

for cnt in contours:
    if cv2.contourArea(cnt)>50:
        [x,y,w,h] = cv2.boundingRect(cnt)

        if h>28:
            cv2.rectangle(im,(x,y),(x+w,y+h),(0,0,255),2)
            roi = thresh[y:y+h,x:x+w]
            roismall = cv2.resize(roi,(10,10))
            cv2.imshow('norm',im)
            key = cv2.waitKey(0)

            if key == 27: # (escape to quit)
                sys.exit()
            elif key in keys:
                responses.append(int(chr(key)))
                sample = roismall.reshape((1,100))
                samples = np.append(samples,sample,0)

responses = np.array(responses,np.float32)
responses = responses.reshape((responses.size,1))
print "training complete"
np.savetxt('generalsamples.data',samples)
np.savetxt('generalresponses.data',responses)
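The saved files are plain text arrays that np.loadtxt can read back. A quick sanity check (assuming the two files above were just written) would be:
import numpy as np

samples = np.loadtxt('generalsamples.data', np.float32)
responses = np.loadtxt('generalresponses.data', np.float32)
print(samples.shape)    # (N, 100): one row of 100 pixel values per labelled digit
print(responses.shape)  # (N,): the digit key pressed for each of those rows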
Now we enter the training and testing part.
For the testing part I used the image below, which has the same type of letters I used for training.

For training we do as follows: load the samples and responses files we saved earlier, create an instance of the classifier we are using (here it is KNearest), and then use the KNearest.train function to train on the data.
For testing purposes, we do as follows: load the image used for testing, process it as before and extract each digit using contour methods, draw a bounding box around it, resize it to 10x10 and store its pixel values in an array as done earlier, then use the KNearest.find_nearest() function to find the item nearest to the one we gave. (If lucky, it recognises the correct digit.)
I included the last two steps (training and testing) in the single code below:
import cv2
import numpy as np
####### training part ###############
samples = np.loadtxt('generalsamples.data',np.float32)
responses = np.loadtxt('generalresponses.data',np.float32)
responses = responses.reshape((responses.size,1))
model = cv2.KNearest()
model.train(samples,responses)
############################# testing part #########################
im = cv2.imread('pi.png')
out = np.zeros(im.shape,np.uint8)
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt)>50:
        [x,y,w,h] = cv2.boundingRect(cnt)
        if h>28:
            cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2)
            roi = thresh[y:y+h,x:x+w]
            roismall = cv2.resize(roi,(10,10))
            roismall = roismall.reshape((1,100))
            roismall = np.float32(roismall)
            retval, results, neigh_resp, dists = model.find_nearest(roismall, k = 1)
            string = str(int((results[0][0])))
            cv2.putText(out,string,(x,y+h),0,1,(0,255,0))
cv2.imshow('im',im)
cv2.imshow('out',out)
cv2.waitKey(0)
And it worked. Below is the result I got:

Here it worked with 100% accuracy. I assume this is because all the digits are of the same kind and the same size.
But anyway, this is a good start for beginners (I hope so).
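Note that cv2.KNearest() and find_nearest() belong to the old OpenCV 2.4 Python interface. A rough sketch (my own, untested) of the equivalent training and prediction calls with the newer cv2.ml module (OpenCV 3.x/4.x), reusing the same data files, would look like:
import cv2
import numpy as np

samples = np.loadtxt('generalsamples.data', np.float32)
responses = np.loadtxt('generalresponses.data', np.float32).reshape(-1, 1)

model = cv2.ml.KNearest_create()
model.train(samples, cv2.ml.ROW_SAMPLE, responses)

# classify one 1x100 float32 feature row (a training row is reused here just to show the call)
feature = samples[0:1]
retval, results, neigh_resp, dists = model.findNearest(feature, k=1)
print(int(results[0][0]))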
Har*_*ris 49
Those who are interested in C++ code can refer to the code below. Thanks to Abid Rahman for the nice explanation.
The procedure is the same as above, but the contour finding uses only the first hierarchy level, so that the algorithm uses only the outer contour of each digit. The part that creates the sample and label data is below:
//Process image to extract contour
Mat thr,gray,con;
Mat src=imread("digit.png",1);
cvtColor(src,gray,CV_BGR2GRAY);
threshold(gray,thr,200,255,THRESH_BINARY_INV); //Threshold to find contour
thr.copyTo(con);
// Create sample and label data
vector< vector <Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;
Mat sample;
Mat response_array;
findContours( con, contours, hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE ); //Find contour
for( int i = 0; i < contours.size(); i = hierarchy[i][0] ) // iterate through first hierarchy level contours
{
    Rect r = boundingRect(contours[i]); // Find bounding rect for each contour
    rectangle(src, Point(r.x,r.y), Point(r.x+r.width,r.y+r.height), Scalar(0,0,255), 2, 8, 0);
    Mat ROI = thr(r); // Crop the image
    Mat tmp1, tmp2;
    resize(ROI, tmp1, Size(10,10), 0, 0, INTER_LINEAR); // resize to 10x10
    tmp1.convertTo(tmp2, CV_32FC1); // convert to float
    sample.push_back(tmp2.reshape(1,1)); // Store sample data
    imshow("src", src);
    int c = waitKey(0); // Read corresponding label for contour from keyboard
    c -= 0x30; // Convert ASCII digit to integer value
    response_array.push_back(c); // Store label to a Mat
    rectangle(src, Point(r.x,r.y), Point(r.x+r.width,r.y+r.height), Scalar(0,255,0), 2, 8, 0);
}
// Store the data to file
Mat response,tmp;
tmp=response_array.reshape(1,1); //make continuous
tmp.convertTo(response,CV_32FC1); // Convert to float
FileStorage Data("TrainingData.yml",FileStorage::WRITE); // Store the sample data in a file
Data << "data" << sample;
Data.release();
FileStorage Label("LabelData.yml",FileStorage::WRITE); // Store the label data in a file
Label << "label" << response;
Label.release();
cout<<"Training and Label data created successfully....!! "<<endl;
imshow("src",src);
waitKey();
The testing part reads the stored sample and label data back, trains KNearest with it, and classifies each digit contour in the test image:
Mat thr,gray,con;
Mat src=imread("dig.png",1);
cvtColor(src,gray,CV_BGR2GRAY);
threshold(gray,thr,200,255,THRESH_BINARY_INV); // Threshold to create input
thr.copyTo(con);
// Read stored sample and label for training
Mat sample;
Mat response,tmp;
FileStorage Data("TrainingData.yml",FileStorage::READ); // Read traing data to a Mat
Data["data"] >> sample;
Data.release();
FileStorage Label("LabelData.yml",FileStorage::READ); // Read label data to a Mat
Label["label"] >> response;
Label.release();
KNearest knn;
knn.train(sample,response); // Train with sample and responses
cout<<"Training compleated.....!!"<<endl;
vector< vector <Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;
//Create input sample by contour finding and cropping
findContours( con, contours, hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
Mat dst(src.rows,src.cols,CV_8UC3,Scalar::all(0));
for( int i = 0; i < contours.size(); i = hierarchy[i][0] ) // iterate through each contour in the first hierarchy level
{
    Rect r = boundingRect(contours[i]);
    Mat ROI = thr(r);
    Mat tmp1, tmp2;
    resize(ROI, tmp1, Size(10,10), 0, 0, INTER_LINEAR);
    tmp1.convertTo(tmp2, CV_32FC1);
    float p = knn.find_nearest(tmp2.reshape(1,1), 1);
    char name[4];
    sprintf(name, "%d", (int)p);
    putText( dst, name, Point(r.x, r.y+r.height), 0, 1, Scalar(0, 255, 0), 2, 8 );
}
imshow("src",src);
imshow("dst",dst);
imwrite("dest.jpg",dst);
waitKey();
In the result, the dot in the first line is detected as 8, and we have not trained for a dot. Also, I am considering every contour in the first hierarchy level as a sample input; the user can avoid some of these false detections by computing the area.
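For the Python side, a small sketch of the same idea (my own illustration, using the two-value return convention of cv2.findContours in OpenCV 2.x/4.x): keep only outer contours via the hierarchy and drop tiny blobs by area; the thresholds are arbitrary.
import cv2
import numpy as np

# synthetic test image: a white '8' on a black background, just to make the sketch runnable
img = np.zeros((60, 60), np.uint8)
cv2.putText(img, '8', (15, 45), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 255, 2)

contours, hierarchy = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for i, cnt in enumerate(contours):
    # hierarchy[0][i] = [next, previous, first_child, parent]; parent == -1 means an outer contour
    if hierarchy[0][i][3] == -1 and cv2.contourArea(cnt) > 50:
        x, y, w, h = cv2.boundingRect(cnt)
        print('outer contour at', (x, y, w, h))  # crop, resize to 10x10 and classify as in the answers above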

Yon*_*son 11
If you are interested in the state of the art in machine learning, you should look into deep learning. You should have a CUDA-capable GPU, or alternatively use a GPU on Amazon Web Services.
Google/Udacity has a nice tutorial using TensorFlow. The tutorial teaches you how to train your own classifier on handwritten digits. Using convolutional networks, I got an accuracy of over 97% on the test set.
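As a rough illustration (not taken from that tutorial), a minimal convolutional digit classifier on MNIST with tf.keras might look like this; it assumes TensorFlow 2.x is installed:
import tensorflow as tf

# MNIST: 28x28 grayscale handwritten digits, labels 0-9
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))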