Compares a template against overlapped image regions.
cv2.matchTemplate(image, templ, method[, result[, mask]]) → result
method: Parameter specifying the comparison method (one of the cv2.TM_* constants); see TemplateMatchModes.
result: Map of comparison results; it must be single-channel 32-bit floating-point. If image is \(W \times H\) and templ is \(w \times h\), then result is \((W−w+1) \times (H−h+1)\). This is a grayscale map in which each pixel value indicates how well the neighbourhood of that pixel matches the template.
mask: Optional mask. It must have the same size as templ. It must either have the same number of channels as the template or only one channel, which is then used for all template and image channels. If the data type is cv2.CV_8U, the mask is interpreted as a binary mask, meaning only elements where the mask is nonzero are used, and they are weighted equally (weight 1) regardless of the actual mask value. For data type cv2.CV_32F, the mask values are used as weights. The exact formulas are documented in TemplateMatchModes.
The function slides through image, compares the overlapped patches of size \(w \times h\) against templ using the specified method, and stores the comparison results in result. TemplateMatchModes describes the formulae for the available comparison methods (\(I\) denotes image, \(T\) template, \(R\) result, \(M\) the optional mask). The summation is done over the template and/or the image patch: \(x'=0...w−1, y'=0...h−1\).
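The summation can be made concrete with a naive pure-NumPy version of the squared-difference mode (a sketch for illustration only; the function name sqdiff_match is made up here, and the real cv2.matchTemplate is far faster):

```python
import numpy as np

def sqdiff_match(image, templ):
    """Naive TM_SQDIFF: R(y, x) = sum over x', y' of
    (T(x', y') - I(x + x', y + y'))**2, with x' = 0..w-1, y' = 0..h-1."""
    H, W = image.shape
    h, w = templ.shape
    T = templ.astype(np.float64)
    I_f = image.astype(np.float64)
    R = np.empty((H - h + 1, W - w + 1), dtype=np.float64)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            # Compare the template against the patch whose top-left is (x, y).
            R[y, x] = np.sum((T - I_f[y:y + h, x:x + w]) ** 2)
    return R

img = np.arange(30, dtype=np.uint8).reshape(5, 6)
templ = img[1:3, 2:4].copy()   # cut a 2x2 template out of the image
R = sqdiff_match(img, templ)
# R has shape (5 - 2 + 1, 6 - 2 + 1) = (4, 5), and is exactly 0 at the
# position the template was cut from.
```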
After the function finishes the comparison, the best matches can be found as global minima (when cv2.TM_SQDIFF was used) or maxima (when cv2.TM_CCORR or cv2.TM_CCOEFF was used) using the cv2.minMaxLoc function. In the case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels, and separate mean values are used for each channel. That is, the function can take a color template and a color image. The result will still be a single-channel image, which is easier to analyze.