I am in need of a component/code that will use UIImagePickerController (or the lower-level API that drives image capture) so the user can take a picture with a rectangular area shown on screen. After the user takes the picture, the area that was covered by the rectangle is the UIImage that I need to process.
So basically imagine that when the camera screen appears, it shows an overlay on top of it that lets the user see a rectangle, plus controls to take or cancel the picture.
The rectangle is a guide for the user: the user needs to take a picture of a document, and a specific area of the document must be framed within the rectangle.
When the user has positioned the appropriate area of the document inside the rectangle, they can press the button to take the picture.
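To illustrate the kind of overlay I have in mind, here is a rough sketch using UIImagePickerController's cameraOverlayView. All frames and button titles here are hypothetical placeholders, not requirements:

```objc
// Sketch only: one possible way to show a rectangular guide over the camera.
// Assumes a presenting view controller that also acts as the picker delegate.
- (void)presentCameraWithGuide {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.showsCameraControls = NO; // we supply our own capture/cancel controls

    // Transparent overlay with a rectangular guide drawn on it.
    UIView *overlay = [[UIView alloc] initWithFrame:picker.view.bounds];
    overlay.backgroundColor = [UIColor clearColor];

    CGRect guideRect = CGRectMake(40, 160, 240, 120); // hypothetical guide frame
    UIView *guide = [[UIView alloc] initWithFrame:guideRect];
    guide.layer.borderColor = [UIColor greenColor].CGColor;
    guide.layer.borderWidth = 2.0;
    guide.backgroundColor = [UIColor clearColor];
    [overlay addSubview:guide];

    // Custom capture button wired to the picker's takePicture method.
    UIButton *shoot = [UIButton buttonWithType:UIButtonTypeSystem];
    shoot.frame = CGRectMake(120, 400, 80, 44);
    [shoot setTitle:@"Capture" forState:UIControlStateNormal];
    [shoot addTarget:picker action:@selector(takePicture)
        forControlEvents:UIControlEventTouchUpInside];
    [overlay addSubview:shoot];

    picker.cameraOverlayView = overlay;
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}
```

A cancel button would be added the same way, calling dismissViewControllerAnimated:completion: on the presenting controller.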
The component/code should then extract and return only the UIImage that matches the rectangle. Your code will be integrated into an existing application, so you will be delivering the source code in Objective-C. In order for me to verify the component works as expected, your code should be delivered in a small demo app that I can build locally with Xcode.
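Extracting the guided region from the captured photo could look roughly like the sketch below. It assumes the crop rectangle has already been converted from overlay (screen) coordinates into the image's pixel coordinate space; that conversion depends on the preview's aspect fill and the image orientation, so it is not shown here:

```objc
// Sketch: crop a UIImage to a rectangle expressed in image pixel coordinates.
// cropRect is a hypothetical rect already mapped from the on-screen guide.
- (UIImage *)croppedImage:(UIImage *)source toRect:(CGRect)cropRect {
    CGImageRef cgCrop = CGImageCreateWithImageInRect(source.CGImage, cropRect);
    if (cgCrop == NULL) {
        return nil; // cropRect fell entirely outside the image
    }
    UIImage *result = [UIImage imageWithCGImage:cgCrop
                                          scale:source.scale
                                    orientation:source.imageOrientation];
    CGImageRelease(cgCrop);
    return result;
}
```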
Please let me know how much experience you have developing on iOS and whether you have already developed components for handling pictures/camera.
If you use any third party code, it needs to be 100% open source.
Update: the project has evolved as follows: through image capture on iOS and by using OpenCV, the code needs to detect a rectangle in the image being shown, and extract the 10 to 15 characters printed inside that rectangle. The character recognition will be done through Tesseract OCR.
Code to be delivered will be in Objective-C and Objective-C++, with all dependencies being open source projects.
Tesseract OCR and OpenCV can be delivered as prebuilt libraries, with instructions on how to compile them from the open source projects.
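For the rectangle detection step, I imagine something along the lines of the following OpenCV sketch (Objective-C++, i.e. a .mm file). This is only an illustration of the approach, assuming a standard Canny-plus-contours pipeline; thresholds are placeholders, and the returned region would then be cropped and handed to Tesseract for recognition:

```objc
// Sketch (Objective-C++): find the largest convex 4-sided contour in a frame.
#import <opencv2/opencv.hpp>

static cv::Rect largestRectangle(const cv::Mat &frame) {
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
    cv::Canny(gray, edges, 50, 150); // hypothetical thresholds

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double bestArea = 0;
    cv::Rect best;
    for (const auto &contour : contours) {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(contour, approx,
                         0.02 * cv::arcLength(contour, true), true);
        // Keep only convex quadrilaterals; pick the one with the largest area.
        if (approx.size() == 4 && cv::isContourConvex(approx)) {
            double area = std::fabs(cv::contourArea(approx));
            if (area > bestArea) {
                bestArea = area;
                best = cv::boundingRect(approx);
            }
        }
    }
    return best; // empty rect if no rectangle was found
}
```

The cropped region inside the detected rectangle would then be passed to Tesseract (for example through its TessBaseAPI C/C++ interface) to read out the 10 to 15 printed characters.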