best approach for template matching of binary (edge) images - object-recognition

To all skimage and opencv gurus, given:
Scene Image
Template Image
What is the best approach to find the cross in the scene image? Both are outputs of smoothing and Canny filters.
I have tried all kinds of examples of template matching in skimage and OpenCV, but the results are not satisfactory.
My ideal solution would be rotation and translation invariant (scale invariance would be a bonus). Is there a way to just convert to contour points and then do a point cloud registration? Would that be more accurate? I thought about RANSAC, but how do I give the inputs to RANSAC?

My approach to solving a similar problem was to create a large set of rotated and scaled variations of the template image and use OpenCV's matchTemplate function.
I would also recommend, as a preprocessing step, filling all detected closed contours white (in both the template and the scene image), since the largely black template image might create false positives in the black regions of the scene image.


Arrow shape detection using OpenCV

I'm currently trying to detect an arrow and its orientation using OpenCV.
I've done the contour detection step, which works fine, but my problem is on the shape-matching side.
I tried to use the matchShapes function in OpenCV, but my results seem really bad.
I use a simple template image and a processed image (usually a photo, but for the test I used a simple image)
Template Image:
Processed Image:
Using these two, matchShapes tells me that the square on the left looks more like the template than the arrow in the image.
I don't know where this comes from.
Is matchShapes a bad function for this use? Someone told me to use the SIFT algorithm, but isn't that a bit overkill for such a simple shape?
I would try to work with image moments to find the shape. For that you have to compute different properties of the image region, and it is best to binarize the image first.
First, some techniques to describe a shape that have nothing to do with image moments.
There is the circumference/perimeter of the shape: to compute it, you sum the lengths of all contour elements of the shape. Next, the area of the shape can be computed: you can count the pixels or use the shoelace formula as in this post. With the perimeter and the area it is possible to calculate the circularity of the object/shape. Beyond that, you can create a bounding box and a convex hull of it. Last but not least, the shape has a center of gravity. These are some properties with which you can build your own feature vectors, but you have to be a bit creative. Or you can use image moments.
Image moments do exist in OpenCV (cv2.moments and cv2.HuMoments); the most robust are the Hu moments. I found an explanation here on Stack Overflow: Meaning of the seven Hu invariant moments function from OpenCV. Hu moments are robust against rotation, translation and scale, so they are perfect for your problem.
In the end, I used the SURF algorithm, as I wanted to find more complex objects ;)

Rotation and scale invariant template matching in OpenCV [duplicate]

Possible Duplicate:
scale and rotation Template matching
I have a grayscale template image with a white background and a black shape over it. I also have several similar test images which vary in rotation and in shape. The test images are not the same as the template, but they are similar.
I want to compare these images and see whether the template matches, i.e. is most similar to, any of the test images. There are no distortions, no noise and no other defects in the images. Are there any tutorials on this topic?
Try the easiest method first.
If I understand you correctly, you have a model - a black shape over a white background. You can treat it as a blob - find its mass center and its rotation by computing the principal axes angle - look there.
Then you must segment the shapes out of the other images and try to find the best corresponding shape with the matchShapes() function - see there how to use it.
matchShapes() performs scale and rotation invariant matching; the smaller the matchShapes() result, the better the match.
Extending your question, you can then find the mass center and rotation of the best matching blob, and from those recover the rotation, scale and displacement between your model and the matched image.
This is quite a complex subject. You generally have options such as the Generalized Hough Transform and Normalized Grayscale Correlation for template matching. The problem is that in their simplest form they are not scale or rotation invariant. You need to focus on the problem at hand; the generalized solution is complex. I recommend simple template matching first, then adding "hacks" for rotation and scale. For rotation you can downscale (low-resolution matching) and template match with rotated models; this can also deal with scale.

Matching a curve pattern to the edges of an image

I have a target image to be searched for a curve along its edges, and a template image that contains the curve. What I need is to find the best match for the template curve within the target image and, based on the score, decide whether there is a match or not. That also includes rotation and resizing of the curve. The target image can be the output of a Canny edge detector if that makes things easier.
I am considering using OpenCV (from Python or Processing/Java, or from C if those have limited access to the required functions) to make things practical and efficient, but I could not find out whether there are any functions (or combinations of them) in OpenCV that are usable for this job. I have been reading through the OpenCV documentation and thought at first that Contours could do the job; however, all the examples show closed shapes, whereas I need to match an open curve to part of an edge.
So is there a way to do this either by using OpenCV or with any known code or algorithm that you would suggest?
Here are some images to illustrate the problem:
My first thought was Generalized Hough Transform. However I don't know any good implementation for that.
I would try SIFT or SURF first on the canny edge image. It usually is used to find 2d areas, not 1d contours, but if you take the minimum bounding box around your contour and use that as the search pattern, it should work.
OpenCV has an implementation for that:
Features2D + Homography to find a known object
A problem may be getting a good edge image, those black shapes in the back could be distracting.
Also see this Stackoverflow answer:
Image Processing: Algorithm Improvement for 'Coca-Cola Can' Recognition

scale and rotation Template matching

I'm using matchTemplate with CV_TM_CCORR_NORMED to compare two images. I want to make this rotation and scale invariant; any ideas?
I tried applying the same method to the Fourier transforms of the image and the template, but the result after rotation is still different.
Template matching with matchTemplate is not good when your object is rotated or scaled in the scene.
You should try the OpenCV Features2D framework, for example SIFT or SURF descriptors with the FLANN matcher. You will also need the findHomography method.
Here is a good example of finding a rotated object in a scene.
In short, the algorithm is this:
1. Find keypoints in your object image
1.1. Extract descriptors from those keypoints
2. Find keypoints in your scene image
2.1. Extract descriptors from those keypoints
3. Match the descriptors with a matcher
4. Analyze your matches
There are different classes of FeatureDetectors, DescriptorExtractors and DescriptorMatchers; you can read about them and choose the ones that fit your task well.
openCV FeatureDetector (steps 1 and 2 in the algorithm above)
openCV DescriptorExtractor (steps 1.1 and 2.1 in the algorithm above)
openCV DescriptorMatcher (step 3 in the algorithm above)
Rotation invariant
For each keypoint:
Take the area around the keypoint.
Calculate the orientation angle of this area with the gradient or another method.
Rotate the pattern and the queried area by this angle to 0.
Calculate descriptors for these rotated areas and match them.
Scale invariant
See BRISK method
There are easier ways to match a template in a scale and rotation invariant way than going via feature detection and homographies (if you know it's really only rotated and scaled, and everything else is constant).
For true object detection, the keypoint-based approaches suggested above work better.
If you know it's the same template and there is no perspective change involved, you take an image pyramid for scale-space detection, and match your templates on the different levels of that pyramid (via something simple, for example SSD or NCC). It will be cheap to find rough matches on higher (= lower resolution) levels of the pyramid. In fact, it will be so cheap, that you can also rotate your template roughly on the low resolution levels, and when you trace the template back down to the higher resolution levels, you use a more finely grained rotation stepping. That's a pretty standard template matching technique and works well in practice.

stitching aerial images

I am trying to stitch 2 aerial images together with very little overlap, probably <500 px of overlap. These images have 3600x2100 resolution. I am using the OpenCV library to complete this task.
Here is my approach:
1. Find feature points and match points between the two images.
2. Find homography between two images
3. Warp one of the images using the homography
4. Stitch the two images
Right now I am trying to get this to work with two images. I am having trouble with step 3 and possibly step 2. I used findHomography() from the OpenCV library to compute the homography between the two images, then called warpPerspective() on one of my images using that homography.
The problem with this approach is that the transformed image is all distorted. It also seems to transform only a certain part of the image, and I have no idea why it is not transforming the whole image.
Can someone give me some advice on how I should approach this problem? Thanks
In the results that you have posted, I can see that you have at least one keypoint mismatch. If you use findHomography(src, dst, 0), it will mess up your homography. You should use findHomography(src, dst, CV_RANSAC) instead.
You can also try to use warpAffine instead of warpPerspective.
Edit: From the results that you posted in the comments to your question, I had the impression that the matching worked quite stably. That means you should be able to get good results for this example as well. Since you mostly seem to be dealing with translation, you could try to filter out the outliers with the following sketched algorithm:
calculate the average (or median) motion vector x_avg
calculate the normalized dot product <x_avg, x_match>
discard x_match if the dot product is smaller than a threshold
To make it work for images with smaller overlap, you would have to look at the detector, descriptors and matches. You do not specify which descriptors you work with, but I would suggest using SIFT or SURF descriptors and the corresponding detectors. You should also set the detector parameters to make a dense sampling (i.e., try to detect more features).
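The sketched filter could look like this (the function name and the 0.9 threshold are mine; src_pts and dst_pts are the matched point coordinates as N×2 arrays):

```python
import numpy as np

def filter_matches_by_direction(src_pts, dst_pts, threshold=0.9):
    """Keep only matches whose motion vector points in roughly the same
    direction as the average motion, as sketched above."""
    motion = dst_pts - src_pts                        # one motion vector per match
    x_avg = motion.mean(axis=0)                       # average motion vector
    x_avg = x_avg / max(np.linalg.norm(x_avg), 1e-9)
    norms = np.maximum(np.linalg.norm(motion, axis=1), 1e-9)
    unit = motion / norms[:, None]
    dots = unit @ x_avg                               # normalized dot products
    keep = dots > threshold                           # discard low-agreement matches
    return src_pts[keep], dst_pts[keep]
```

Using the median instead of the mean makes x_avg itself more robust when the outlier fraction is large.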
You can refer to this answer which is slightly related: OpenCV - Image Stitching
To stitch images using a homography, the most important thing to take care of is finding correspondence points in both images. The fewer outliers there are among the correspondence points, the better the generated homography.
Using a robust technique such as RANSAC together with OpenCV's findHomography() function (use CV_RANSAC as the method) will still generate a reasonable homography, provided the percentage of inliers is higher than the percentage of outliers. Also make sure that there are at least 4 inliers among the correspondence points passed to the findHomography function.