This project's objective is to use the camera of iOS devices to read MRZ codes from passports (https://en.wikipedia.org/wiki/Machine-readable_passport).
We built a prototype using OpenCV and Tesseract with promising but not yet reliable results. It can locate the text lines of the MRZ, crop each line from a camera frame, and run Tesseract on it, but the OCR output is not accurate enough to read the information dependably.
We believe the results can be improved by:
- Deskewing the cropped text lines;
- Training Tesseract on the correct font (OCR-B);
- Combining results from multiple frames to read all the fields and verify each check digit.
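The check-digit verification in the last step follows ICAO Doc 9303: each character maps to a value (digits as themselves, A–Z as 10–35, the filler `<` as 0), the values are multiplied by the repeating weights 7, 3, 1, and the sum modulo 10 must equal the printed check digit. A minimal Swift sketch (the function name is ours, not existing project code):

```swift
// ICAO 9303 check digit: '0'-'9' → 0-9, 'A'-'Z' → 10-35, '<' → 0,
// weighted 7, 3, 1 repeating; the sum modulo 10 is the check digit.
func checkDigit(for field: String) -> Int? {
    let weights = [7, 3, 1]
    var sum = 0
    for (index, scalar) in field.unicodeScalars.enumerated() {
        let value: Int
        switch scalar {
        case "0"..."9": value = Int(scalar.value) - 48        // offset from U+0030 '0'
        case "A"..."Z": value = Int(scalar.value) - 65 + 10   // offset from U+0041 'A'
        case "<":       value = 0
        default:        return nil  // character cannot occur in an MRZ field
        }
        sum += value * weights[index % 3]
    }
    return sum % 10
}
```

With the specimen document number from ICAO Doc 9303, `checkDigit(for: "L898902C3")` returns 6, which matches the printed check digit; an OCR result that fails this test on any field can be discarded and retried with the next frame.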
We need to:
- Be able to activate a debug mode that stores the cropped images and OCR results for analysis;
- Package the code as a framework for other iOS apps. Adding the framework to a new app should be enough to start the camera, detect the MRZ, run OCR, parse the fields, verify the check digits, and send the result back to the app. All of these steps should be transparent to the app code;
- Maintain compatibility with iOS 8+.
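The framework surface implied by the requirements above could look like the following sketch. Every type and method name here is an illustrative assumption, not an existing API; the point is that the host app sees only start/stop, a debug flag, and the parsed result, while detection, deskew, OCR, parsing, and check-digit verification stay internal:

```swift
import UIKit

// Hypothetical public API for the proposed framework (names are assumptions).
public struct MRZResult {
    public let documentNumber: String
    public let birthDate: String      // YYMMDD, as printed in the MRZ
    public let expiryDate: String     // YYMMDD
    public let allCheckDigitsValid: Bool
}

public protocol MRZScannerDelegate: AnyObject {
    func scanner(_ scanner: MRZScanner, didFinishWith result: MRZResult)
}

public final class MRZScanner {
    public weak var delegate: MRZScannerDelegate?

    // Debug mode: when enabled, cropped line images and raw OCR output
    // would be written to the app's Documents directory for analysis.
    public var isDebugEnabled = false

    // Presents the camera and runs the whole pipeline internally:
    // detect lines → crop → deskew → OCR → parse → verify check digits.
    public func start(presentingFrom viewController: UIViewController) { /* camera + pipeline */ }
    public func stop() { /* tear down the capture session */ }
}
```

Shipping this as a dynamic framework is consistent with the iOS 8+ requirement, since embedded Swift frameworks require iOS 8 as the minimum deployment target.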