Real-time QR code / barcode detection with a webcam using OpenCV and ZBar


Tutorial: Real-time QR code / barcode detection from a webcam video stream using OpenCV and ZBar

Prerequisites:

You will need to have installed OpenCV and ZBar (see previous tutorials) for this to work.

Source on Github:  https://github.com/ayoungprogrammer/WebcamCodeScanner

Code:

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <zbar.h>
#include <iostream>

using namespace cv;
using namespace std;
using namespace zbar;

// g++ main.cpp -I/usr/local/include -L/usr/local/lib -lopencv_core -lopencv_highgui -lopencv_imgproc -lzbar

int main(int argc, char* argv[])
{
  VideoCapture cap(0); // open video camera no. 0
  // cap.set(CV_CAP_PROP_FRAME_WIDTH, 800);
  // cap.set(CV_CAP_PROP_FRAME_HEIGHT, 640);

  if (!cap.isOpened()) // if not successful, exit program
  {
    cout << "Cannot open the video cam" << endl;
    return -1;
  }

  ImageScanner scanner;
  scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1);

  double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH);   // width of the video frames
  double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); // height of the video frames
  cout << "Frame size : " << dWidth << " x " << dHeight << endl;

  namedWindow("MyVideo", CV_WINDOW_AUTOSIZE); // create a window called "MyVideo"

  while (1)
  {
    Mat frame;
    bool bSuccess = cap.read(frame); // read a new frame from the video
    if (!bSuccess) // if not successful, break loop
    {
      cout << "Cannot read a frame from video stream" << endl;
      break;
    }

    Mat grey;
    cvtColor(frame, grey, CV_BGR2GRAY);
    int width = frame.cols;
    int height = frame.rows;
    uchar *raw = (uchar *)grey.data;

    // wrap image data
    Image image(width, height, "Y800", raw, width * height);

    // scan the image for barcodes
    int n = scanner.scan(image);

    // extract results
    for (Image::SymbolIterator symbol = image.symbol_begin();
         symbol != image.symbol_end();
         ++symbol) {
      vector<Point> vp;

      // do something useful with results
      cout << "decoded " << symbol->get_type_name()
           << " symbol \"" << symbol->get_data() << '"' << endl;

      // collect the location points of the symbol
      int n = symbol->get_location_size();
      for (int i = 0; i < n; i++) {
        vp.push_back(Point(symbol->get_location_x(i), symbol->get_location_y(i)));
      }

      // draw a rotated bounding box around the symbol
      RotatedRect r = minAreaRect(vp);
      Point2f pts[4];
      r.points(pts);
      for (int i = 0; i < 4; i++) {
        line(frame, pts[i], pts[(i+1)%4], Scalar(255,0,0), 3);
      }
      //cout << "Angle: " << r.angle << endl;
    }

    imshow("MyVideo", frame); // show the frame in the "MyVideo" window
    if (waitKey(30) == 27) // wait 30ms for the 'esc' key; if pressed, break loop
    {
      cout << "esc key is pressed by user" << endl;
      break;
    }
  }
  return 0;
}

To Test

Find any QR code or barcode, hold it close to your webcam, and the scanner should pick it up.
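By default, ZBar enables every symbology it supports. If you only want QR codes, you can disable everything and re-enable a single symbology; a minimal sketch using ZBar's per-symbology set_config (configure the scanner this way before the capture loop):

scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 0);   // disable all symbologies
scanner.set_config(ZBAR_QRCODE, ZBAR_CFG_ENABLE, 1); // re-enable QR codes only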

Extracting Regions of Interest using Page Markers


Source on GitHub: https://github.com/ayoungprogrammer/OMR-Example

Introduction

Optical Mark Recognition is recognizing certain “marks” on an image and using those marks as reference points to extract other regions of interest (ROIs) on the page. OMR is a relatively new technology and there is close to no documentation on the subject. Current OMR technologies like Scantron require custom machines designed specifically to scan custom sheets of paper. These methods work well, but the machines and paper are costly to produce, and the approach is inflexible. Hopefully I can provide some insight into creating an efficient and effective OMR algorithm that uses standard household scanners and a simple template.

An OMR algorithm first needs a template page so it knows where the ROIs are in relation to the markers. It then needs to scan a page and recognize where the markers are. Using the template, the algorithm can then determine where the ROIs are in relation to the markers. In the case of Scantrons, the markers are the black lines on the sides, and the ROIs are the bubbles that are checked.

For an effective OMR, the markers should be at least halfway across the page from each other (either vertically or horizontally). The further apart the markers are, the higher the accuracy you will achieve.

For the simplicity of this tutorial, we will use two QR codes, one in each corner, as the markers. This will be our template:

Opening the template in Paint, we can find the coordinates of the ROIs and markers.

Markers:
Top right point of first QR code: (1084, 76)
Bottom left point of second QR code: (77, 1436)

Regions of interest (ROIs):
Name box: (223, 105) -> (603, 152)
Payroll # box: (223, 152) -> (603, 198)
SIN box: (223, 198) -> (603, 244)
Address box: (223, 244) -> (603, 290)
Postal box: (223, 291) -> (603, 336)
Picture: (129, 491) -> (766, 806)

Using these coordinates, we can do some simple math to find the relative positioning of the ROIs.

We can also find the angle of rotation from the markers. The angle between the top right corner and the bottom left corner of the template markers is 53.48222 degrees. If the angle between the scanned markers differs from that, we rotate the whole page by the difference, which fixes the skewed rotation.
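Condensed from the drawRects() function in the full listing below (tr/bl are the template's marker corners, rtr/rbl the same corners found in the scan), the correction boils down to:

// How much to rotate the scanned page so its markers line up with the template's
double templateAngle = atan2(tr.y - bl.y, tr.x - bl.x);     // the template's marker angle (magnitude ~53.48 degrees)
double scannedAngle  = atan2(rtr.y - rbl.y, rtr.x - rbl.x); // the markers in the scan
double angleShift    = -(templateAngle - scannedAngle);     // radians; the listing converts to degrees for getRotationMatrix2D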

Scanned image:

OMR Processed Image + Fixed rotation

Extensions

Two QR codes, one in each corner, look ugly, but there are many other types of markers you can use.
Once you have the coordinates of the ROIs, you can easily extract them and possibly OCR the data you need (see the sketch after this list).
If you want to OMR a page where you have no control over the template, you need some heuristics to find markers on the page (for example, looking for a logo, or line detection).
You can easily add an extension for multiple choice or checkboxes and extract the ROI to determine the selection.
In real applications you will want to create your own template dynamically and encode the ROI data somewhere, so you do not have to enter the marker and ROI coordinates manually.
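As a rough illustration of the extraction idea mentioned above (a minimal sketch; extractROI is a hypothetical helper, and the page is assumed to already be de-rotated as in drawRects below):

// Crop one ROI out of the corrected page image.
cv::Mat extractROI(const cv::Mat& page, const cv::Rect& roi) {
  // clip to the image bounds so an off-page ROI cannot crash the crop
  cv::Rect clipped = roi & cv::Rect(0, 0, page.cols, page.rows);
  return page(clipped).clone(); // clone() detaches the crop from the page buffer
}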

Source Code

Source on Github: https://github.com/ayoungprogrammer/OMR-Example

 

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <zbar.h>
#include <iostream>

using namespace cv;
using namespace std;
using namespace zbar;

//g++ main.cpp /usr/local/include/ /usr/local/lib/ -lopencv_highgui.2.4.8 -lopencv_core.2.4.8

void drawQRCodes(Mat img,Image& image){
  // extract results 
  for(Image::SymbolIterator symbol=image.symbol_begin(); symbol != image.symbol_end();++symbol) { 
    vector<Point> vp; 

    //draw QR Codes
    int n = symbol->get_location_size(); 
    for(int i=0;i<n;i++){ 
      vp.push_back(Point(symbol->get_location_x(i),symbol->get_location_y(i))); 
    } 
    RotatedRect r = minAreaRect(vp); 
    Point2f pts[4]; 
    r.points(pts); 
    //Display QR code
    for(int i=0;i<4;i++){ 
      line(img,pts[i],pts[(i+1)%4],Scalar(255,0,0),3); 
    } 
  } 
}

Rect makeRect(float x,float y,float x2,float y2){
  return Rect(Point2f(x,y),Point2f(x2,y2));
}

Point2f rotPoint(Point2f p,Point2f o,double rad){
  Point2f p1 = Point2f(p.x-o.x,p.y-o.y);

  return Point2f(p1.x * cos(rad)-p1.y*sin(rad)+o.x,p1.x*sin(rad)+p1.y*cos(rad)+o.y);
}

void drawRects(Mat& img,Point2f rtr,Point2f rbl){
  vector<Rect> rects;

  Point2f tr(1084,76);
  Point2f bl(77,1436);

  rects.push_back(makeRect(223,105,603,152));
  rects.push_back(makeRect(223,152,603,198));
  rects.push_back(makeRect(223,198,603,244));
  rects.push_back(makeRect(223,244,603,290));
  rects.push_back(makeRect(223,291,603,336));

  rects.push_back(makeRect(129,491,765,806));


  //Fix rotation angle
  double angle = atan2(tr.y-bl.y,tr.x-bl.x);
  double realAngle = atan2(rtr.y-rbl.y,rtr.x-rbl.x);

  double angleShift = -(angle-realAngle);

  //Rotate image
  Point2f rc((rtr.x+rbl.x)/2,(rbl.y+rtr.y)/2);
  Mat rotMat = getRotationMatrix2D(rc,angleShift/3.14159265359*180.0,1.0);
  warpAffine(img,img,rotMat,Size(img.cols,img.rows),INTER_CUBIC,BORDER_TRANSPARENT);

  rtr = rotPoint(rtr,rc,-angleShift);
  rbl = rotPoint(rbl,rc,-angleShift);

  //Calculate ratio between template and real image
  double realWidth = rtr.x-rbl.x;
  double realHeight = rbl.y-rtr.y;

  double width = tr.x-bl.x;
  double height = bl.y - tr.y;

  double wr = realWidth/width;
  double hr = realHeight/height;

  circle(img,rbl,3,Scalar(0,255,0),2);
  circle(img,rtr,3,Scalar(0,255,0),2);

  for(int i=0;i<rects.size();i++){
    Rect r = rects[i];
    double x1 = (r.x-tr.x)*wr+rtr.x;
    double y1 = (r.y-tr.y)*hr+rtr.y;
    double x2 = (r.x+r.width-tr.x)*wr +rtr.x;
    double y2 = (r.y+r.height-tr.y)*hr + rtr.y;
    rectangle(img,Point2f(x1,y1),Point2f(x2,y2),Scalar(0,0,255),3);
    //circle(img,Point2f(x1,y1),3,Scalar(0,0,255));
  }
}

int main(int argc, char* argv[])
{
  Mat img = imread(argv[1]);

  ImageScanner scanner; 
  scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1); 

  namedWindow("OMR",CV_WINDOW_AUTOSIZE); //create a window

  Mat grey;
  cvtColor(img,grey,CV_BGR2GRAY);

  int width = img.cols; 
  int height = img.rows; 
  uchar *raw = (uchar *)grey.data; 
  // wrap image data 
  Image image(width, height, "Y800", raw, width * height); 
  // scan the image for barcodes 
  scanner.scan(image); 

  // Top right and bottom left marker points
  Point2f tr(0,0);
  Point2f bl(0,0);

  // extract results
  for(Image::SymbolIterator symbol = image.symbol_begin(); symbol != image.symbol_end(); ++symbol) {
    // Find the TR point: the highest top-right corner (location index 3)
    if(tr.y==0 || tr.y>symbol->get_location_y(3)){
      tr = Point(symbol->get_location_x(3), symbol->get_location_y(3));
    }

    // Find the BL point: the lowest bottom-left corner (location index 1)
    if(bl.y==0 || bl.y<symbol->get_location_y(1)){
      bl = Point(symbol->get_location_x(1), symbol->get_location_y(1));
    }
  }

  drawQRCodes(img,image);
  drawRects(img,tr,bl);
  imwrite("omr.jpg", img); 

  return 0;
}

Hosting multiple websites on one server with Apache 2.4 (including Node) on Ubuntu 12.04

While I was setting up my VPS (Virtual Private Server) running Ubuntu 12.04, I had a lot of difficulty getting my existing Node.js server to run alongside Apache 2.4.

I had multiple domains that I wanted to host on one server: one was served by Node.js and the others by Apache 2.4.

The solution took a bit of Googling and trial and error.

Open /etc/apache2/sites-enabled/000-default.conf

To set up the Node proxy, we need to enable Apache's proxy modules:

sudo a2enmod proxy
sudo a2enmod proxy_http

Edit it as such:

# NameVirtualHost is not needed in Apache 2.4 (the directive is ignored)

<VirtualHost *:80>
    ServerName myapachedomain.ca
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/mywebpage1
</VirtualHost>

<VirtualHost *:80>
    ServerName myapachesite2.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/mywebpage2
</VirtualHost>

### FOR NODE.JS SERVER
<VirtualHost *:80>
     ServerName mynodedomain.com
     ProxyPass / http://localhost:3000/
</VirtualHost>

Your Node.js server should be running on localhost at port 3000.

Tutorial: Setting up and Installing the MEAN stack


Tutorial: Setting up the MEAN stack

The MEAN stack (MongoDB, ExpressJS, AngularJS and Node.js) is a group of powerful technologies that let you write a completely functional website, from back end to front end, using only JavaScript. Using only JavaScript lets developers work in a single language instead of managing several different languages (such as PHP or Ruby) across the front and back end. JavaScript has its own pitfalls, but it is still a powerful language if utilized correctly.
MongoDB: Open source NoSQL database
ExpressJS: Web application framework for Node (serves the front end)
NodeJS: Fast, efficient, non-blocking backend
AngularJS: Front-end framework for enhancing web apps

Project available on GitHub: https://github.com/ayoungprogrammer/meanTemplate

Install Eclipse IDE

For web development, I find Eclipse very useful, as it gives a visual view of the file system and can build from the IDE.

Install Node.JS

Download Node.js from the official site.

Install ExpressJS

In the console, type:
npm install express -g
This will install Express globally on your machine.
If you get a permissions error, you may need:
sudo npm install express -g

Install Nodeclipse for using Node.js in Eclipse

Follow the instructions on the Nodeclipse site.

Install MongoDB

Follow the download instructions on the MongoDB site.
On Mac OS X you can use Homebrew to install MongoDB quickly:
brew install mongodb
Create your first Express project
In Eclipse: File -> New -> Express Project
Type in your new project name and click Finish when done.
You should have a project that looks like this:
public
  stylesheets
    styles.css
routes
  index.js
  user.js
view
  layout.jade
  index.jade
app.js
package.json
README.md
Here is an explanation of what each item does:
public: Everything in the public folder is served to the client by ExpressJS
stylesheets: Commonly, this folder contains all the .css files for a website
styles.css: The current CSS file for the default webpage
routes: This folder contains the route files to which requests are directed
index.js: The routes for the index page
user.js: The routes for users [this can be deleted]
view: This folder contains the views of the application
layout.jade: The default template of a webpage
index.jade: The index webpage
app.js: The main file that Node.js runs
package.json: Tells Node which project dependencies to install
README.md: Tells other developers what the project is
The project currently uses the Jade templating engine to render pages. A template engine compiles source files into HTML files.

Run the app

In the console, type:
node app.js
In your browser, go to the URL: http://localhost:3000
If you have done everything correctly, then you should see this:

Express

Welcome to Express

Install Bower

Bower is a tool for installing libraries, similar to npm.
To install it, type the following into a console:
npm install bower -g
If you get errors, you may need:
sudo npm install bower -g
Create a folder called public/js.
This folder is where all the JavaScript files for the front end will be placed.
Create another folder called public/js/vendor.
This folder is where all the vendor JavaScript libraries will be placed. Vendor means external 3rd-party libraries, such as AngularJS, which we will be installing.
Create a file called .bowerrc in your project directory with the following:
{ "directory": "public/js/vendor" }
Everything that Bower installs will be put into public/js/vendor.

Install AngularJS

We use Bower to install AngularJS by typing in the console:
bower install angular
This should install AngularJS into public/js/vendor. At the time of writing this tutorial, the version is 1.2.3.
Install Mongoose
Mongoose is the API used to connect to MongoDB.
We can install it by adding a dependency in package.json:
  "dependencies": {
    "express": "3.4.0",
    "jade": "*",
    "mongoose": "*"
  }
and running the following in the console from the project directory:
npm install
npm will automatically look at package.json for dependencies to install.

Your tools are ready!

The MEAN stack tools are all installed and ready, but the project does not do anything yet.
We will build a MEAN app in the next part of the tutorial.

Tutorial: Scanning Barcodes / QR Codes with OpenCV using ZBar

With the ZBar library, scanning barcodes / QR codes is quite simple. ZBar can identify multiple barcode / QR code types and give the coordinates of their locations.

This tutorial was written using:
Microsoft Visual Studio 2008 Express
OpenCV 2.4.2
Windows Vista 32-bit
ZBar 0.10

You will need OpenCV installed before doing this tutorial.

Tutorial here: http://ayoungprogrammer.blogspot.ca/2012/10/tutorial-install-opencv-242-for-windows.html

1. Install ZBar (Windows Installer)

http://sourceforge.net/projects/zbar/files/zbar/0.10/zbar-0.10-setup.exe/download

Check “Install development libraries and headers” (you will need this)

Install ZBar in the default directory:
“C:\Program Files\ZBar”

2. Import headers and libraries

Tools -> Options

Projects & Solutions -> VC++ Directories
Go to “Include files” and add: “C:\Program Files\ZBar\include”

Go to “Library files” and add: “C:\Program Files\ZBar\lib”

3. Link libraries in current project

Create an empty blank console project
Right click your project -> Properties -> Configuration Properties -> Linker -> Input

In additional dependencies copy and paste the following:
libzbar-0.lib
opencv_core242d.lib
opencv_imgproc242d.lib
opencv_highgui242d.lib
opencv_ml242d.lib
opencv_video242d.lib
opencv_features2d242d.lib
opencv_calib3d242d.lib
opencv_objdetect242d.lib
opencv_contrib242d.lib
opencv_legacy242d.lib
opencv_flann242d.lib

4. Test Program

Make a new file in your project: main.cpp
 #include "zbar.h"  
 #include "cv.h"  
 #include "highgui.h"  
 #include <iostream>  
 using namespace std;  
 using namespace zbar;  
 using namespace cv;  
 int main(void){  
      ImageScanner scanner;  
      scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1);  
       // obtain image data  
      char file[256];  
      cin>>file;  
      Mat img = imread(file,0);  
      Mat imgout;  
      cvtColor(img,imgout,CV_GRAY2RGB);  
      int width = img.cols;  
      int height = img.rows;  
   uchar *raw = (uchar *)img.data;  
   // wrap image data  
   Image image(width, height, "Y800", raw, width * height);  
   // scan the image for barcodes  
   int n = scanner.scan(image);  
   // extract results  
   for(Image::SymbolIterator symbol = image.symbol_begin();  
     symbol != image.symbol_end();  
     ++symbol) {  
                vector<Point> vp;  
     // do something useful with results  
     cout << "decoded " << symbol->get_type_name()  
        << " symbol "" << symbol->get_data() << '"' <<" "<< endl;  
           int n = symbol->get_location_size();  
           for(int i=0;i<n;i++){  
                vp.push_back(Point(symbol->get_location_x(i),symbol->get_location_y(i))); 
           }  
           RotatedRect r = minAreaRect(vp);  
           Point2f pts[4];  
           r.points(pts);  
           for(int i=0;i<4;i++){  
                line(imgout,pts[i],pts[(i+1)%4],Scalar(255,0,0),3);  
           }  
           cout<<"Angle: "<<r.angle<<endl;  
   }  
      imshow("imgout.jpg",imgout);  
   // clean up  
   image.set_data(NULL, 0);  
       waitKey();  
 }  

5. Copy libzbar-0.dll from C:/Program Files/ZBar/bin to your project folder

6. Run program 

Sample Images

Tutorial: Detection / recognition of multiple rectangles and extracting with OpenCV


This tutorial is focused on taking a picture and extracting the rectangles in the image that are above a certain size:

I am using OpenCV 2.4.2 with Microsoft Visual Studio 2008 Express, but it should work with other versions as well.

Thanks to: opencv-code.com for their helpful guides

Step 1: Clean up

So once again, we'll use my favourite snippet for cleaning up an image:
apply a Gaussian blur and use an adaptive threshold to binarize the image.
//Apply blur to smooth edges and use adaptive thresholding
cv::Size size(3,3);
cv::GaussianBlur(img,img,size,0);
adaptiveThreshold(img, img,255,CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY,75,10);
cv::bitwise_not(img, img);

Step 2: Hough Line detection

Use probabilistic Hough line detection to figure out where the lines are. This algorithm works by going through every point in the image and checking every angle.
 vector<Vec4i> lines;  
 HoughLinesP(img, lines, 1, CV_PI/180, 80, 100, 10);  
And here we have the results of the algorithm:

Step 3: Use connected components to determine what the shapes are

This is the most complex part of the algorithm (general pseudocode):
First, initialize every line to be in an undefined group
For every pair of lines, compute the intersection of the two line segments (if they do not intersect, ignore the point)
      If both lines are undefined, make a new group out of them
      If only one line is in a group, add the other line into that group
      If both lines are in different groups, merge all the lines from one group into the other
      If both lines are in the same group, do nothing
cv::Point2f computeIntersect(cv::Vec4i a, cv::Vec4i b)
{
  int x1 = a[0], y1 = a[1], x2 = a[2], y2 = a[3];
  int x3 = b[0], y3 = b[1], x4 = b[2], y4 = b[3];
  if (float d = ((float)(x1-x2) * (y3-y4)) - ((y1-y2) * (x3-x4)))
  {
    cv::Point2f pt;
    pt.x = ((x1*y2 - y1*x2) * (x3-x4) - (x1-x2) * (x3*y4 - y3*x4)) / d;
    pt.y = ((x1*y2 - y1*x2) * (y3-y4) - (y1-y2) * (x3*y4 - y3*x4)) / d;
    // 10 is a threshold: the POI may lie at most 10 pixels outside either segment
    if(pt.x<min(x1,x2)-10||pt.x>max(x1,x2)+10||pt.y<min(y1,y2)-10||pt.y>max(y1,y2)+10){
      return cv::Point2f(-1,-1);
    }
    if(pt.x<min(x3,x4)-10||pt.x>max(x3,x4)+10||pt.y<min(y3,y4)-10||pt.y>max(y3,y4)+10){
      return cv::Point2f(-1,-1);
    }
    return pt;
  }
  else
    return cv::Point2f(-1, -1);
}
Connected components
vector<int> poly(lines.size(), -1); // group id of each line, -1 = undefined
int curPoly = 0;
vector<vector<cv::Point2f> > corners;
for (int i = 0; i < lines.size(); i++)
{
  for (int j = i+1; j < lines.size(); j++)
  {
    cv::Point2f pt = computeIntersect(lines[i], lines[j]);
    if (pt.x >= 0 && pt.y >= 0 && pt.x < img2.size().width && pt.y < img2.size().height){

      // both lines undefined: start a new group
      if(poly[i]==-1 && poly[j]==-1){
        vector<Point2f> v;
        v.push_back(pt);
        corners.push_back(v);
        poly[i] = curPoly;
        poly[j] = curPoly;
        curPoly++;
        continue;
      }
      // only one line is in a group: add the other to it
      if(poly[i]==-1 && poly[j]>=0){
        corners[poly[j]].push_back(pt);
        poly[i] = poly[j];
        continue;
      }
      if(poly[i]>=0 && poly[j]==-1){
        corners[poly[i]].push_back(pt);
        poly[j] = poly[i];
        continue;
      }
      // both lines are already in groups
      if(poly[i]>=0 && poly[j]>=0){
        if(poly[i]==poly[j]){
          corners[poly[i]].push_back(pt);
          continue;
        }
        // merge group j into group i
        for(int k=0;k<corners[poly[j]].size();k++){
          corners[poly[i]].push_back(corners[poly[j]][k]);
        }
        corners[poly[j]].clear();
        // relabel every line that was in the old group
        int old = poly[j];
        for(int k=0;k<poly.size();k++){
          if(poly[k]==old) poly[k] = poly[i];
        }
        continue;
      }
    }
  }
}
The circles represent the points of intersection and the colours represent the different shapes. 

Step 4: Find corners of the polygon

Now we need to find the corners of each polygon formed from the points of intersection.
Pseudocode:
For each group of points:
  Compute the mass centre (average of the points)
  For each point above the mass centre, add it to the top list
  For each point below the mass centre, add it to the bottom list
  Sort the top list and the bottom list by x value
  The first element of the top list is the leftmost (top left point)
  The last element of the top list is the rightmost (top right point)
  The first element of the bottom list is the leftmost (bottom left point)
  The last element of the bottom list is the rightmost (bottom right point)

bool comparator(Point2f a, Point2f b){
  return a.x < b.x;
}

void sortCorners(std::vector<cv::Point2f>& corners, cv::Point2f center)
{
  std::vector<cv::Point2f> top, bot;
  for (int i = 0; i < corners.size(); i++)
  {
    if (corners[i].y < center.y)
      top.push_back(corners[i]);
    else
      bot.push_back(corners[i]);
  }
  sort(top.begin(), top.end(), comparator);
  sort(bot.begin(), bot.end(), comparator);
  cv::Point2f tl = top[0];
  cv::Point2f tr = top[top.size()-1];
  cv::Point2f bl = bot[0];
  cv::Point2f br = bot[bot.size()-1];
  corners.clear();
  corners.push_back(tl);
  corners.push_back(tr);
  corners.push_back(br);
  corners.push_back(bl);
}
for(int i=0;i<corners.size();i++){
  if(corners[i].size()<4) continue;
  cv::Point2f center(0,0);
  for(int j=0;j<corners[i].size();j++){
    center += corners[i][j];
  }
  center *= (1. / corners[i].size());
  sortCorners(corners[i], center);
}

Step 5: Extraction

The final step is to extract each rectangle from the image. We can do this quite easily with OpenCV's perspective transform. To estimate the dimensions of each rectangle, we use the bounding rectangle of its corners. If that rectangle is under our wanted area, we ignore the polygon; if the polygon has fewer than 4 points, we ignore it as well.
for(int i=0;i<corners.size();i++){
  if(corners[i].size()<4) continue;
  Rect r = boundingRect(corners[i]);
  if(r.area()<50000) continue;
  cout<<r.area()<<endl;
  // Define the destination image
  cv::Mat quad = cv::Mat::zeros(r.height, r.width, CV_8UC3);
  // Corners of the destination image
  std::vector<cv::Point2f> quad_pts;
  quad_pts.push_back(cv::Point2f(0, 0));
  quad_pts.push_back(cv::Point2f(quad.cols, 0));
  quad_pts.push_back(cv::Point2f(quad.cols, quad.rows));
  quad_pts.push_back(cv::Point2f(0, quad.rows));
  // Get transformation matrix
  cv::Mat transmtx = cv::getPerspectiveTransform(corners[i], quad_pts);
  // Apply perspective transformation (img3: the original, unthresholded image)
  cv::warpPerspective(img3, quad, transmtx, quad.size());
  stringstream ss;
  ss<<i<<".jpg";
  imshow(ss.str(), quad);
}

Tutorial: Creating a Multiple Choice Scanner with OpenCV

EDIT (July 14, 2016): A better way to extract would be to use page markers.
This is a tutorial on creating a multiple choice scanner similar to the Scantron system. We will take a photo of a multiple choice answer sheet and find the corresponding letter of each filled-in bubble. I will be using OpenCV 2.4.3 for this project.

Source code : https://github.com/ayoungprogrammer/MultipleChoiceScanner

Algorithm

We can split the algorithm into 7 parts:
1. Perform image preprocessing to make the image black & white (binarization)
2. Use a Hough transform to find the lines in the image
3. Find the points of intersection of the lines to form the quadrilateral
4. Apply a perspective transform to the quadrilateral
5. Use a Hough transform to find the circles in the image
6. Sort the circles into rows and columns
7. Find circles filled in to a density of 30% or more and designate these as “filled in”
Thanks to this tutorial for helping me find the POIs and use the perspective transformation.

1. Image Preprocessing

I like to use my favourite binarization method for cleaning up the image:
 – First apply a Gaussian blur to smooth out random dots
 – Use adaptive thresholding to set each pixel to black or white
cv::Size size(3,3);
cv::GaussianBlur(img,img,size,0);
adaptiveThreshold(img, img,255,CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY,75,10);
cv::bitwise_not(img, img);

 

We get a nice clean image with distinct shapes marked in white. However, we do get a few dots of white but they shouldn’t affect anything.

2. Hough transform to get lines

Use probabilistic Hough line detection to find the sides of the rectangle. It works by going through every point in the image and checking whether a line exists at each angle. This is the most expensive operation in the whole process because it has to check every point and angle.
cv::Mat img2;
cvtColor(img, img2, CV_GRAY2RGB);
vector<Vec4i> lines;
HoughLinesP(img, lines, 1, CV_PI/180, 80, 400, 10);
for( size_t i = 0; i < lines.size(); i++ )
{
  Vec4i l = lines[i];
  line( img2, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, CV_AA);
}

3. Find POI of lines

From: http://opencv-code.com/tutorials/automatic-perspective-correction-for-quadrilateral-objects/

However, we need to sort the points from top left to bottom right:

bool comparator(Point2f a, Point2f b){
  return a.x < b.x;
}

void sortCorners(std::vector<cv::Point2f>& corners, cv::Point2f center)
{
  std::vector<cv::Point2f> top, bot;
  for (int i = 0; i < corners.size(); i++)
  {
    if (corners[i].y < center.y)
      top.push_back(corners[i]);
    else
      bot.push_back(corners[i]);
  }
  sort(top.begin(), top.end(), comparator);
  sort(bot.begin(), bot.end(), comparator);
  cv::Point2f tl = top[0];
  cv::Point2f tr = top[top.size()-1];
  cv::Point2f bl = bot[0];
  cv::Point2f br = bot[bot.size()-1];
  corners.clear();
  corners.push_back(tl);
  corners.push_back(tr);
  corners.push_back(br);
  corners.push_back(bl);
}
// Get mass center
cv::Point2f center(0,0);
for (int i = 0; i < corners.size(); i++)
  center += corners[i];
center *= (1. / corners.size());
sortCorners(corners, center);

4. Apply a perspective transform

At first I used a minimum-area rectangle to extract and crop the region, but I got a slanted image: because the picture was taken at an angle, the rectangle we photographed has become a trapezoid. (If you're using a scanner, this shouldn't be much of an issue.) However, we can fix this with a perspective transform, and OpenCV supplies a function for doing so.
 // Get transformation matrix  
  cv::Mat transmtx = cv::getPerspectiveTransform(corners, quad_pts);  
  // Apply perspective transformation  
  cv::warpPerspective(img3, quad, transmtx, quad.size());
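The snippet above assumes quad and quad_pts, which come from the rectangle-extraction tutorial earlier on this page; for completeness, they can be set up like this:

// Destination image sized from the bounding rectangle of the sorted corners
cv::Rect r = cv::boundingRect(corners);
cv::Mat quad = cv::Mat::zeros(r.height, r.width, CV_8UC3);
std::vector<cv::Point2f> quad_pts;
quad_pts.push_back(cv::Point2f(0, 0));
quad_pts.push_back(cv::Point2f(quad.cols, 0));
quad_pts.push_back(cv::Point2f(quad.cols, quad.rows));
quad_pts.push_back(cv::Point2f(0, quad.rows));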

5. Find circles

We use a Hough transform to find all the circles, using the function OpenCV provides for detecting them.
 
cvtColor(img, cimg, CV_BGR2GRAY);
vector<Vec3f> circles;
HoughCircles(cimg, circles, CV_HOUGH_GRADIENT, 1, img.rows/16, 100, 75, 0, 0);
for( size_t i = 0; i < circles.size(); i++ )
{
  Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
  int radius = cvRound(circles[i][2]);
  // circle center
  circle( testImg, center, 3, Scalar(0,255,0), -1, 8, 0 );
  // circle outline
  circle( testImg, center, radius, Scalar(0,0,255), 3, 8, 0 );
}

 

6. Sort circles into rows and columns

Now that we have the valid circles, we should sort them into rows and columns. We can check whether two circles are in the same row with a simple test:
y1 = y coordinate of the centre of circle 1
y2 = y coordinate of the centre of circle 2
r = radius
y2 - r < y1 and y2 + r > y1
If two circles pass this test (circle 1's centre lies within one radius of circle 2's), we can say they are in the same row. We do this for all the circles until we have figured out which circles are in which rows. row is an array of data about each row: the double part of the pair is the y coordinate of the row, and the int is the index into bubble (used for sorting).

vector<vector<Vec3f> > bubble;
vector<pair<double,int> > row;
for(int i=0;i<circles.size();i++){
  bool found = false;
  int r = cvRound(circles[i][2]);
  int x = cvRound(circles[i][0]);
  int y = cvRound(circles[i][1]);
  // try to place this circle into an existing row
  for(int j=0;j<row.size();j++){
    int y2 = row[j].first;
    if(y-r<y2 && y+r>y2){
      bubble[j].push_back(circles[i]);
      found = true;
      break;
    }
  }
  // otherwise start a new row
  if(!found){
    int l = row.size();
    row.push_back(make_pair(y,l));
    vector<Vec3f> v;
    v.push_back(circles[i]);
    bubble.push_back(v);
  }
}

Then sort the rows by y coordinate, and inside each row sort by x coordinate, so you have an ordering from top to bottom and left to right.

bool comparator2(pair<double,int> a, pair<double,int> b){
  return a.first < b.first;
}
bool comparator3(Vec3f a, Vec3f b){
  return a[0] < b[0];
}
....
sort(row.begin(), row.end(), comparator2);
for(int i=0;i<bubble.size();i++){
  sort(bubble[i].begin(), bubble[i].end(), comparator3);
}

7. Check bubble

Now that we have the circles sorted, in each row we can check whether the density of filled pixels is 30% or higher, which indicates that the bubble is filled in.
We can use countNonZero to count the filled-in pixels over the area of the region.
In each row we look for the highest fill density over 30%, which will most likely be the highlighted answer. If none is found, the row is blank.
for(int i=0;i<row.size();i++){
  double max = 0;
  int ind = -1;
  for(int j=0;j<bubble[row[i].second].size();j++){
    Vec3f cir = bubble[row[i].second][j];
    int r = cvRound(cir[2]);
    int x = cvRound(cir[0]);
    int y = cvRound(cir[1]);
    Point c(x,y);
    // circle outline
    circle( img, c, r, Scalar(0,0,255), 3, 8, 0 );
    // fill density inside the bubble's bounding square
    Rect rect(x-r, y-r, 2*r, 2*r);
    Mat submat = cimg(rect);
    double p = (double)countNonZero(submat)/(submat.size().width*submat.size().height);
    if(p>=0.3 && p>max){
      max = p;
      ind = j;
    }
  }
  if(ind==-1) printf("%d:-", i+1);    // blank row
  else printf("%d:%c", i+1, 'A'+ind); // letter of the filled bubble
  cout<<endl;
}

Creating a Ribbon Add-in for Office Word with NetOffice in Visual Studio 2008 Express, without VSTO

Usually, to create an add-in for Office, you need Visual Studio Tools for Office (VSTO), which comes with Visual Studio Professional Edition at something like $800. However, there is a free alternative that works quite well: NetOffice. NetOffice is a free C# / VB library you can use to create your own ribbon add-ins for Office. In this tutorial we will target Word.

1. Install NetOffice

Extract the folder somewhere convenient

2. Create a project

Run NetOffice.DevelopToolbox
Go to the VS Project Wizard tab -> click New Project
Select automatic add-in under Project Type
Select VS2008 Express under Environment
Select C# under Language (or VB if you prefer)
Select the 3.5 .NET runtime
Select VS Project folder under Project Folder
Check off Word
Fill out the description of your add-in
Fill out additional options
Check “I want to customize the Ribbon UI”
Hit Finish

3. Run Project

Open the project with Visual C# 2008 Express
You should have these files already in place:
Compile the project (press F5)
You should get this error:

4. Run Office Word 2007

And we should have our first add-in working:
If you click any of the buttons, you will get a message box!

Equation OCR Tutorial Part 3: Making an OCR for Equations using OpenCV and Tesseract


I’ll be doing a series on using OpenCV and Tesseract to take a scanned image of an equation, read it in, graph it, and give related data. I was surprised at how well the results turned out =)

I will be using OpenCV 2.4.2 and Tesseract OCR 3.02.02.

I have also made two tutorials on installing Tesseract and OpenCV for Vista x86 with Microsoft Visual Studio 2008 Express. However, you can consult the official sites for documentation on installing the libraries on your system.

Parts

Equation OCR Part 1: Using contours to extract characters in OpenCV
Equation OCR Part 2: Training characters with Tesseract OCR
Equation OCR Part 3: Equation OCR

Tutorials

Installing OpenCV: http://blog.ayoungprogrammer.com/2012/10/tutorial-install-opencv-242-for-windows.html/

Installing Tesseract: http://blog.ayoungprogrammer.com/2012/11/tutorial-installing-tesseract-ocr-30202.html/

Official Links:

OpenCV : http://opencv.org/
Tesseract OCR: http://code.google.com/p/tesseract-ocr/

Overview:

The overall goal of the final program is to convert the image of an equation into a text equation that we can graph. We can break this project into three parts: extracting characters from text, training the OCR, and recognition for converting images of equations into text.

Recognition

Recognition is easy once we have the training files we need for Tesseract. To initialize our language and set the recognition mode to single characters:
tess_api.Init("", "mat", tesseract::OEM_DEFAULT);
tess_api.SetPageSegMode(static_cast<tesseract::PageSegMode>(10));
After extracting the characters, we can run Tesseract on each single character to get the recognized character.
OpenCV uses a different data storage type from Tesseract, but we can easily pass the raw data of a Mat to Tesseract.
tess_api.TesseractRect(resizedPic.data, 1, resizedPic.step1(), 0, 0, resizedPic.cols, resizedPic.rows);
tess_api.SetImage(resizedPic.data, resizedPic.size().width, resizedPic.size().height, resizedPic.channels(), resizedPic.step1());
tess_api.Recognize(0);
const char* out = tess_api.GetUTF8Text();
In the output we should find the recognized character. Since the characters have been sorted from left to right, we can append all the recognized characters to a string stream and output the final result.
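Roughly, the assembly loop looks like this (a sketch: sortedChars and recognizeChar are hypothetical names for the left-to-right sorted character images and a wrapper around the Tesseract calls above):

// Append each recognized character, left to right, into the equation string.
stringstream eqn;
for (size_t i = 0; i < sortedChars.size(); i++) {
  const char* out = recognizeChar(sortedChars[i]); // hypothetical wrapper
  eqn << out;
}
cout << eqn.str() << endl;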

Exponents

In a polynomial there are variables (x), numbers, brackets and exponents. Exponents can easily be found by checking whether the bottom of a character reaches 2/3 of the way down to the bottom of the line. If it doesn't, it is probably superscript and we can put a ^ in front of the number to signify an exponent. The green line shows the 2/3 line to check. As you can see, all the standard characters that are not exponents go past the 2/3 line.
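A sketch of that test (names are illustrative; box is a character's bounding rectangle, lineTop/lineBottom delimit the text line):

// True when the character's bottom never reaches the 2/3 line, i.e. likely superscript.
bool isExponent(const cv::Rect& box, int lineTop, int lineBottom) {
  int twoThirdsLine = lineTop + 2 * (lineBottom - lineTop) / 3;
  return (box.y + box.height) < twoThirdsLine;
}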

Wolfram

To send the equation to Wolfram Alpha, I had to reverse-engineer the URL format they use, which was quite simple. All URLs begin with “http://www.wolframalpha.com/input/?i=”. Numbers and letters map to themselves, but other characters map to hex codes:
if(eqn[i]=='+')url<<"%2B";
if(eqn[i]=='^')url<<"%5E";
if(eqn[i]=='=')url<<"%3D";
if(eqn[i]=='(')url<<"%28";
if(eqn[i]==')')url<<"%29";

Extensions

The program can be extended to work with other functions such as log, sin, cos, etc. by doing additional training for letters. It can also be extended to handle fraction bars, although that takes more work. You first look for “bars”: shapes with width at least three times their height, with shapes both above and below the bar. Take the longest bar first, because you want to find the largest fraction first; then recursively find fractions in the numerator and denominator, going from the largest fraction to the smallest. Then append to the string (numerator) / (denominator). However, there may be non-fraction terms to the left and right of the fraction, so you will need to re-sort by x coordinate.
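A sketch of the bar test described above (box is a shape's bounding rectangle; the above/below check and the recursion are left out):

// A shape counts as a "bar" when its width is at least three times its height.
bool isBar(const cv::Rect& box) {
  return box.width >= 3 * box.height;
}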

Conclusion

In finishing this tutorial, I hope you have learned how to use OCR and contour extraction, as I certainly have. If you release any extensions of the programs from my tutorials, I hope you will credit me and also send me a message. Thanks for reading!

Source code

Equation OCR Tutorial Part 2: Training characters with Tesseract OCR

I’ll be doing a series on using OpenCV and Tesseract to take a scanned image of an equation, read it in, graph it, and give related data. I was surprised at how well the results turned out =)

I will be using OpenCV 2.4.2 and Tesseract OCR 3.02.02.

I have also made two tutorials on installing Tesseract and OpenCV for Vista x86 with Microsoft Visual Studio 2008 Express. However, you can consult the official sites for documentation on installing the libraries on your system.

Parts

Equation OCR Part 1: Using contours to extract characters in OpenCV
Equation OCR Part 2: Training characters with Tesseract OCR
Equation OCR Part 3: Equation OCR

Tutorials

Installing OpenCV: http://blog.ayoungprogrammer.com/2012/10/tutorial-install-opencv-242-for-windows.html/

Installing Tesseract: http://blog.ayoungprogrammer.com/2012/11/tutorial-installing-tesseract-ocr-30202.html/

Official Links:

OpenCV : http://opencv.org/
Tesseract OCR: http://code.google.com/p/tesseract-ocr/

Overview:

The overall goal of the final program is to convert the image of an equation into a text equation that we can graph. We can break this project into three parts: extracting characters from text, training the OCR, and recognition for converting images of equations into text.

Training

We will split the training process into two parts: classifying and Tesseracting. For classifying, we will use the extraction method from part one to build a program that generates training data for Tesseract: we extract characters and have a user identify each character to be classified. The characters go in folders labelled with the character name; for example, all the 9's go in the “9” folder and all the x's go in the “x” folder. For the Tesseracting part, we take our training data and run it through the Tesseract training process so the data can be used for OCR.
Classifying

Classifying will take the longest time, because the training data needs about 10 samples of each character. Our characters are the digits 0 to 9, left bracket, right bracket, the plus sign and x. We ignore dashes because they can easily be recognized as shapes with width three times greater than height, and we can ignore equals signs because they are just two dashes stacked on top of one another. With some slight modifications to the extraction program from part 1, we can make a training program for this. The training data I collected required about 30 different images of equations.

Classifying source:

http://pastebin.com/iJQsPh9L

Tesseracting
The original Tesseract training method is confusing in the documentation, and the training itself is very tedious. The recommended method consists of supplying sample images plus a separate data file indicating each symbol and the rectangle in the image that corresponds to it. As you can imagine, this becomes very tedious, since you need the coordinates and dimensions of the rectangle for every character. There are a few online GUI tools that help with the process; I, on the other hand, am very lazy and did not want to go through a hundred rectangles, so I made a program that generates an image with the training data and also generates the corresponding rectangles. The final result is something like this:

Source code here:

Now that we have created the training boxes, we can feed the results into the Tesseract engine for it to learn how to recognize the characters. Open a command prompt and go to the folder containing your .tif file and the box file with the rectangle data. Type in tesseract and hit enter; if it says command not found, Tesseract was not installed properly.

To start the training: (mat for math)
tesseract mat.arial.exp0.tif mat.arial.exp0 nobatch box.train

Now you will see that Tesseract has generated a file called mat.arial.exp0.tr. Don't touch this file. Next we have to tell Tesseract which characters are possible. This list can be generated by running:
unicharset_extractor mat.arial.exp0.box

Create a new file called font_properties (no file extension, like the unicharset file; I just copied the unicharset file and saved it under the new name font_properties). Do not use Notepad, as it will mess up the formatting; use something like WordPad. Inside font_properties, type:

arial 1 0 0 1 0

Next to start mftraining:

mftraining -F font_properties -U unicharset mat.arial.exp0.tr

Shape clustering:
shapeclustering -F font_properties -U unicharset mat.arial.exp0.tr

mftraining again for shapetables:
mftraining -F font_properties -U unicharset mat.arial.exp0.tr

cntraining for clustering (same pattern as above):
cntraining mat.arial.exp0.tr

Now we have to combine all these files into one. First, rename the following files:

inttemp -> mat.inttemp
shapetable -> mat.shapetable
normproto -> mat.normproto
pffmtable -> mat.pffmtable
unicharset -> mat.unicharset

To generate your new tessdata file:
combine_tessdata mat.

The final generated file is mat.traineddata. Move this file into the tessdata folder in the Tesseract installation directory so that the Tesseract library can access it: C:\Program Files\Tesseract-OCR\tessdata

To test, go into one of your test data folders, such as “1”, and run Tesseract with your language file:

tesseract 1.jpg output -l mat -psm 10

In the output file you should see the character “1”. Congratulations, you have just trained your first OCR language!

Source codes:

Classifying characters:

http://pastebin.com/iJQsPh9L

Generating Tesseract training data: