Imgproc
Objective-C
@interface Imgproc : NSObject
Swift
class Imgproc : NSObject
The Imgproc module
Member classes: GeneralizedHough, GeneralizedHoughBallard, GeneralizedHoughGuil, CLAHE, Subdiv2D, LineSegmentDetector
Member enums: SmoothMethod_c, MorphShapes_c, SpecialFilter, MorphTypes, MorphShapes, InterpolationFlags, WarpPolarMode, InterpolationMasks, DistanceTypes, DistanceTransformMasks, ThresholdTypes, AdaptiveThresholdTypes, GrabCutClasses, GrabCutModes, DistanceTransformLabelTypes, FloodFillFlags, ConnectedComponentsTypes, ConnectedComponentsAlgorithmsTypes, RetrievalModes, ContourApproximationModes, ShapeMatchModes, HoughModes, LineSegmentDetectorModes, HistCompMethods, ColorConversionCodes, RectanglesIntersectTypes, LineTypes, HersheyFonts, MarkerTypes, TemplateMatchModes, ColormapTypes

Declaration
Objective-C
@property (class, readonly) int CV_GAUSSIAN_5x5
Swift
class var CV_GAUSSIAN_5x5: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_SCHARR
Swift
class var CV_SCHARR: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_MAX_SOBEL_KSIZE
Swift
class var CV_MAX_SOBEL_KSIZE: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_RGBA2mRGBA
Swift
class var CV_RGBA2mRGBA: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_mRGBA2RGBA
Swift
class var CV_mRGBA2RGBA: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_WARP_FILL_OUTLIERS
Swift
class var CV_WARP_FILL_OUTLIERS: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_WARP_INVERSE_MAP
Swift
class var CV_WARP_INVERSE_MAP: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_CHAIN_CODE
Swift
class var CV_CHAIN_CODE: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_LINK_RUNS
Swift
class var CV_LINK_RUNS: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_POLY_APPROX_DP
Swift
class var CV_POLY_APPROX_DP: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_CLOCKWISE
Swift
class var CV_CLOCKWISE: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_COUNTER_CLOCKWISE
Swift
class var CV_COUNTER_CLOCKWISE: Int32 { get }

Declaration
Objective-C
@property (class, readonly) int CV_CANNY_L2_GRADIENT
Swift
class var CV_CANNY_L2_GRADIENT: Int32 { get }

Returns Gabor filter coefficients.
For more details about Gabor filter equations and parameters, see: Gabor Filter.
Declaration
Parameters
ksize
Size of the filter returned.
sigma
Standard deviation of the Gaussian envelope.
theta
Orientation of the normal to the parallel stripes of a Gabor function.
lambd
Wavelength of the sinusoidal factor.
gamma
Spatial aspect ratio.
psi
Phase offset.
ktype
Type of filter coefficients. It can be CV_32F or CV_64F .
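As an illustrative sketch only, the coefficients can be computed in plain Swift from the standard Gabor equation (the documentation links to the equation rather than stating it, so the formula below is the common textbook form and `gaborKernel` is a hypothetical helper, not the library implementation):

```swift
import Foundation

// Standard Gabor formula with rotated coordinates
//   x' =  x cos(theta) + y sin(theta)
//   y' = -x sin(theta) + y cos(theta)
//   g  = exp(-(x'^2 + gamma^2 y'^2) / (2 sigma^2)) * cos(2 pi x' / lambd + psi)
func gaborKernel(ksize: Int, sigma: Double, theta: Double,
                 lambd: Double, gamma: Double, psi: Double) -> [[Double]] {
    let c = ksize / 2  // kernel center
    return (0..<ksize).map { yi in
        (0..<ksize).map { xi in
            let x = Double(xi - c), y = Double(yi - c)
            let xr = x * cos(theta) + y * sin(theta)
            let yr = -x * sin(theta) + y * cos(theta)
            return exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma))
                 * cos(2 * Double.pi * xr / lambd + psi)
        }
    }
}
```

With psi = 0 the Gaussian envelope and the cosine both peak at the kernel center, so the central coefficient is 1.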

Returns Gabor filter coefficients.
For more details about Gabor filter equations and parameters, see: Gabor Filter.
Declaration
Parameters
ksize
Size of the filter returned.
sigma
Standard deviation of the Gaussian envelope.
theta
Orientation of the normal to the parallel stripes of a Gabor function.
lambd
Wavelength of the sinusoidal factor.
gamma
Spatial aspect ratio.
psi
Phase offset.

Returns Gabor filter coefficients.
For more details about Gabor filter equations and parameters, see: Gabor Filter.
Declaration
Parameters
ksize
Size of the filter returned.
sigma
Standard deviation of the Gaussian envelope.
theta
Orientation of the normal to the parallel stripes of a Gabor function.
lambd
Wavelength of the sinusoidal factor.
gamma
Spatial aspect ratio.

Returns Gaussian filter coefficients.
The function computes and returns the \texttt{ksize} \times 1 matrix of Gaussian filter coefficients:

G_i = \alpha \cdot e^{-(i - (\texttt{ksize}-1)/2)^2 / (2 \cdot \texttt{sigma}^2)},

where i = 0..\texttt{ksize}-1 and \alpha is the scale factor chosen so that \sum_i G_i = 1.
Two of such generated kernels can be passed to sepFilter2D. Those functions automatically recognize smoothing kernels (a symmetrical kernel with a sum of weights equal to 1) and handle them accordingly. You may also use the higher-level GaussianBlur.
Declaration
Objective-C
+ (nonnull Mat *)getGaussianKernel:(int)ksize sigma:(double)sigma ktype:(int)ktype;
Swift
class func getGaussianKernel(ksize: Int32, sigma: Double, ktype: Int32) -> Mat
Parameters
ksize
Aperture size. It should be odd (\texttt{ksize} \mod 2 = 1) and positive.
sigma
Gaussian standard deviation. If it is non-positive, it is computed from ksize as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8.
ktype
Type of filter coefficients. It can be CV_32F or CV_64F.
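A minimal pure-Swift sketch of the formula above (illustrative only; `gaussianKernel` is a hypothetical helper, and the real getGaussianKernel may return slightly different, precomputed coefficients for small kernel sizes):

```swift
import Foundation

// G_i = alpha * exp(-(i - (ksize-1)/2)^2 / (2 sigma^2)), with alpha chosen
// so that the coefficients sum to 1.
func gaussianKernel(ksize: Int, sigma: Double) -> [Double] {
    // If sigma is non-positive, derive it from ksize as documented.
    let s = sigma > 0 ? sigma : 0.3 * (Double(ksize - 1) * 0.5 - 1) + 0.8
    let g = (0..<ksize).map { i -> Double in
        let x = Double(i) - Double(ksize - 1) / 2
        return exp(-x * x / (2 * s * s))
    }
    let sum = g.reduce(0, +)
    return g.map { $0 / sum }  // normalize: alpha = 1 / sum
}
```

The result is symmetric and peaks at the center, which is what makes it a smoothing kernel in the sense described above.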

Returns Gaussian filter coefficients.
The function computes and returns the \texttt{ksize} \times 1 matrix of Gaussian filter coefficients:

G_i = \alpha \cdot e^{-(i - (\texttt{ksize}-1)/2)^2 / (2 \cdot \texttt{sigma}^2)},

where i = 0..\texttt{ksize}-1 and \alpha is the scale factor chosen so that \sum_i G_i = 1.
Two of such generated kernels can be passed to sepFilter2D. Those functions automatically recognize smoothing kernels (a symmetrical kernel with a sum of weights equal to 1) and handle them accordingly. You may also use the higher-level GaussianBlur.
Declaration
Objective-C
+ (nonnull Mat *)getGaussianKernel:(int)ksize sigma:(double)sigma;
Swift
class func getGaussianKernel(ksize: Int32, sigma: Double) -> Mat
Parameters
ksize
Aperture size. It should be odd (\texttt{ksize} \mod 2 = 1) and positive.
sigma
Gaussian standard deviation. If it is non-positive, it is computed from ksize as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8.
Calculates a perspective transform from four pairs of the corresponding points.
The function calculates the 3 \times 3 matrix of a perspective transform so that:

\begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = \texttt{map\_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}

where dst(i)=(x'_i,y'_i), src(i)=(x_i, y_i), i=0,1,2,3.
See findHomography, +warpPerspective:dst:M:dsize:flags:borderMode:borderValue:, perspectiveTransform

Calculates a perspective transform from four pairs of the corresponding points.
The function calculates the 3 \times 3 matrix of a perspective transform so that:

\begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = \texttt{map\_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}

where dst(i)=(x'_i,y'_i), src(i)=(x_i, y_i), i=0,1,2,3.
See findHomography, +warpPerspective:dst:M:dsize:flags:borderMode:borderValue:, perspectiveTransform

Calculates an affine matrix of 2D rotation.
The function calculates the following matrix:

\begin{bmatrix} \alpha & \beta & (1-\alpha) \cdot \texttt{center.x} - \beta \cdot \texttt{center.y} \\ -\beta & \alpha & \beta \cdot \texttt{center.x} + (1-\alpha) \cdot \texttt{center.y} \end{bmatrix}

where

\begin{array}{l} \alpha = \texttt{scale} \cdot \cos \texttt{angle}, \\ \beta = \texttt{scale} \cdot \sin \texttt{angle} \end{array}

The transformation maps the rotation center to itself. If this is not the target, adjust the shift.
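The matrix above can be sketched in plain Swift (illustrative; `rotationMatrix2D` is a hypothetical helper, assuming the angle is given in degrees as in OpenCV):

```swift
import Foundation

// Builds the 2x3 affine rotation matrix described above.
func rotationMatrix2D(center: (x: Double, y: Double),
                      angle: Double, scale: Double) -> [[Double]] {
    let a = scale * cos(angle * Double.pi / 180)  // alpha
    let b = scale * sin(angle * Double.pi / 180)  // beta
    return [
        [a, b, (1 - a) * center.x - b * center.y],
        [-b, a, b * center.x + (1 - a) * center.y],
    ]
}
```

Because the translation terms are derived from the center, applying the matrix to the center point returns the center itself, as the text states.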

Returns a structuring element of the specified size and shape for morphological operations.
The function constructs and returns the structuring element that can be further passed to #erode, #dilate or #morphologyEx. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.
Declaration
Objective-C
+ (nonnull Mat *)getStructuringElement:(MorphShapes)shape ksize:(nonnull Size2i *)ksize anchor:(nonnull Point2i *)anchor;
Swift
class func getStructuringElement(shape: MorphShapes, ksize: Size2i, anchor: Point2i) -> Mat
Parameters
shape
Element shape that could be one of #MorphShapes
ksize
Size of the structuring element.
anchor
Anchor position within the element. The default value (-1, -1) means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted.
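A pure-Swift sketch of the rectangular and cross-shaped elements described above (illustrative only; the elliptical shape is omitted and `structuringElement` is a hypothetical helper, not the binding):

```swift
// Builds a binary mask: a full rectangle, or a cross consisting of the
// row and column that pass through the anchor. A nil anchor means the
// element center, matching the documented (-1, -1) default.
func structuringElement(cross: Bool, width: Int, height: Int,
                        anchor: (x: Int, y: Int)? = nil) -> [[UInt8]] {
    let ax = anchor?.x ?? width / 2
    let ay = anchor?.y ?? height / 2
    return (0..<height).map { y in
        (0..<width).map { x -> UInt8 in
            if !cross { return 1 }                // rectangle: all ones
            return (x == ax || y == ay) ? 1 : 0   // cross through the anchor
        }
    }
}
```

This also illustrates why only the cross shape depends on the anchor: the rectangle is all ones regardless of where the anchor sits.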
Returns a structuring element of the specified size and shape for morphological operations.
The function constructs and returns the structuring element that can be further passed to #erode, #dilate or #morphologyEx. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.
Declaration
Objective-C
+ (nonnull Mat *)getStructuringElement:(MorphShapes)shape ksize:(nonnull Size2i *)ksize;
Swift
class func getStructuringElement(shape: MorphShapes, ksize: Size2i) -> Mat
Parameters
shape
Element shape that could be one of #MorphShapes
ksize
Size of the structuring element. The anchor is at the element center.

Calculates all of the moments up to the third order of a polygon or rasterized shape.
The function computes moments, up to the 3rd order, of a vector shape or a rasterized shape. The results are returned in the structure cv::Moments.
Note
Only applicable to contour moments calculations from Python bindings: the numpy type for the input array should be either np.int32 or np.float32.
Declaration
Parameters
array
Raster image (single-channel, 8-bit or floating-point 2D array) or an array (1 \times N or N \times 1) of 2D points (Point or Point2f).
binaryImage
If it is true, all non-zero image pixels are treated as 1's. The parameter is used for images only.
Return Value
moments.
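For the raster case, the moments are m_{pq} = \sum_{x,y} x^p y^q \cdot I(x,y) for p+q up to 3. A plain-Swift sketch (illustrative; `rasterMoment` is a hypothetical helper and does not reproduce the full cv::Moments structure with its central and normalized moments):

```swift
import Foundation

// Computes one raster moment m_pq over a single-channel image
// given as rows of pixel values (img[y][x]).
func rasterMoment(_ img: [[Double]], p: Int, q: Int) -> Double {
    var m = 0.0
    for (y, row) in img.enumerated() {
        for (x, v) in row.enumerated() {
            m += pow(Double(x), Double(p)) * pow(Double(y), Double(q)) * v
        }
    }
    return m
}
```

For example, the centroid of a shape is (m10/m00, m01/m00), one of the most common uses of these moments.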

Calculates all of the moments up to the third order of a polygon or rasterized shape.
The function computes moments, up to the 3rd order, of a vector shape or a rasterized shape. The results are returned in the structure cv::Moments.
Note
Only applicable to contour moments calculations from Python bindings: the numpy type for the input array should be either np.int32 or np.float32.
Declaration
Parameters
array
Raster image (single-channel, 8-bit or floating-point 2D array) or an array (1 \times N or N \times 1) of 2D points (Point or Point2f).
Return Value
moments.

The function is used to detect translational shifts that occur between two images.
The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation
Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.
The function performs the following equations:
- First it applies a Hanning window (see http://en.wikipedia.org/wiki/Hann_function) to each image to remove possible edge effects. This window is cached until the array size changes to speed up processing time.
- Next it computes the forward DFTs of each source array:
\mathbf{G}_a = \mathcal{F}\{src_1\}, \; \mathbf{G}_b = \mathcal{F}\{src_2\}, where \mathcal{F} is the forward DFT.
- It then computes the cross-power spectrum of each frequency domain array:
R = \frac{\mathbf{G}_a \mathbf{G}_b^*}{|\mathbf{G}_a \mathbf{G}_b^*|}
- Next the cross-correlation is converted back into the time domain via the inverse DFT:
r = \mathcal{F}^{-1}\{R\}
- Finally, it computes the peak location and computes a 5x5 weighted centroid around the peak to achieve sub-pixel accuracy.
(\Delta x, \Delta y) = \texttt{weightedCentroid} \{\arg \max_{(x, y)}\{r\}\}
If non-zero, the response parameter is computed as the sum of the elements of r within the 5x5 centroid around the peak location. It is normalized to a maximum of 1 (meaning there is a single peak) and will be smaller when there are multiple peaks.
See dft, getOptimalDFTSize, idft, mulSpectrums, createHanningWindow
Declaration
Parameters
src1
Source floating point array (CV_32FC1 or CV_64FC1)
src2
Source floating point array (CV_32FC1 or CV_64FC1)
window
Floating point array with windowing coefficients to reduce edge effects (optional).
response
Signal power within the 5x5 centroid around the peak, between 0 and 1 (optional).
Return Value
detected phase shift (subpixel) between the two arrays.
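The steps above can be sketched for the 1-D case in plain Swift (illustrative only: a naive O(n^2) DFT, no Hanning window, and an integer peak instead of the 5x5 weighted centroid; all names are hypothetical):

```swift
import Foundation

struct Cx { var re: Double; var im: Double }

// Naive discrete Fourier transform; inverse = true applies the 1/n scale.
func dft(_ x: [Cx], inverse: Bool) -> [Cx] {
    let n = x.count
    let s = inverse ? 1.0 : -1.0
    return (0..<n).map { u in
        var acc = Cx(re: 0, im: 0)
        for k in 0..<n {
            let ang = s * 2 * Double.pi * Double(u * k) / Double(n)
            acc.re += x[k].re * cos(ang) - x[k].im * sin(ang)
            acc.im += x[k].re * sin(ang) + x[k].im * cos(ang)
        }
        if inverse { acc.re /= Double(n); acc.im /= Double(n) }
        return acc
    }
}

// Returns the circular shift d such that b(x) ≈ a(x - d).
func phaseCorrelate1D(_ a: [Double], _ b: [Double]) -> Int {
    let ga = dft(a.map { Cx(re: $0, im: 0) }, inverse: false)
    let gb = dft(b.map { Cx(re: $0, im: 0) }, inverse: false)
    // Cross-power spectrum R = Ga * conj(Gb) / |Ga * conj(Gb)|
    let r: [Cx] = zip(ga, gb).map { p, q in
        let re = p.re * q.re + p.im * q.im
        let im = p.im * q.re - p.re * q.im
        let mag = max(sqrt(re * re + im * im), 1e-12)
        return Cx(re: re / mag, im: im / mag)
    }
    let corr = dft(r, inverse: true)       // back to the spatial domain
    var peak = 0
    for i in corr.indices where corr[i].re > corr[peak].re { peak = i }
    return (a.count - peak) % a.count      // the peak index encodes the shift
}
```

Because the spectrum is normalized to unit magnitude, the inverse transform is a sharp impulse at the shift, which is what makes the peak easy to locate.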

The function is used to detect translational shifts that occur between two images.
The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation
Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.
The function performs the following equations:
- First it applies a Hanning window (see http://en.wikipedia.org/wiki/Hann_function) to each image to remove possible edge effects. This window is cached until the array size changes to speed up processing time.
- Next it computes the forward DFTs of each source array:
\mathbf{G}_a = \mathcal{F}\{src_1\}, \; \mathbf{G}_b = \mathcal{F}\{src_2\}, where \mathcal{F} is the forward DFT.
- It then computes the cross-power spectrum of each frequency domain array:
R = \frac{\mathbf{G}_a \mathbf{G}_b^*}{|\mathbf{G}_a \mathbf{G}_b^*|}
- Next the cross-correlation is converted back into the time domain via the inverse DFT:
r = \mathcal{F}^{-1}\{R\}
- Finally, it computes the peak location and computes a 5x5 weighted centroid around the peak to achieve sub-pixel accuracy.
(\Delta x, \Delta y) = \texttt{weightedCentroid} \{\arg \max_{(x, y)}\{r\}\}
If non-zero, the response parameter is computed as the sum of the elements of r within the 5x5 centroid around the peak location. It is normalized to a maximum of 1 (meaning there is a single peak) and will be smaller when there are multiple peaks.
See dft, getOptimalDFTSize, idft, mulSpectrums, createHanningWindow
Declaration
Parameters
src1
Source floating point array (CV_32FC1 or CV_64FC1)
src2
Source floating point array (CV_32FC1 or CV_64FC1)
window
Floating point array with windowing coefficients to reduce edge effects (optional).
Return Value
detected phase shift (subpixel) between the two arrays.

The function is used to detect translational shifts that occur between two images.
The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation
Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.
The function performs the following equations:
- First it applies a Hanning window (see http://en.wikipedia.org/wiki/Hann_function) to each image to remove possible edge effects. This window is cached until the array size changes to speed up processing time.
- Next it computes the forward DFTs of each source array:
\mathbf{G}_a = \mathcal{F}\{src_1\}, \; \mathbf{G}_b = \mathcal{F}\{src_2\}, where \mathcal{F} is the forward DFT.
- It then computes the cross-power spectrum of each frequency domain array:
R = \frac{\mathbf{G}_a \mathbf{G}_b^*}{|\mathbf{G}_a \mathbf{G}_b^*|}
- Next the cross-correlation is converted back into the time domain via the inverse DFT:
r = \mathcal{F}^{-1}\{R\}
- Finally, it computes the peak location and computes a 5x5 weighted centroid around the peak to achieve sub-pixel accuracy.
(\Delta x, \Delta y) = \texttt{weightedCentroid} \{\arg \max_{(x, y)}\{r\}\}
If non-zero, the response parameter is computed as the sum of the elements of r within the 5x5 centroid around the peak location. It is normalized to a maximum of 1 (meaning there is a single peak) and will be smaller when there are multiple peaks.
See dft, getOptimalDFTSize, idft, mulSpectrums, createHanningWindow
Declaration
Parameters
src1
Source floating point array (CV_32FC1 or CV_64FC1)
src2
Source floating point array (CV_32FC1 or CV_64FC1)
Return Value
detected phase shift (subpixel) between the two arrays.

Creates a smart pointer to a cv::CLAHE class and initializes it.
Declaration
Parameters
clipLimit
Threshold for contrast limiting.
tileGridSize
Size of grid for histogram equalization. Input image will be divided into equally sized rectangular tiles. tileGridSize defines the number of tiles in row and column.

Creates a smart pointer to a cv::CLAHE class and initializes it.
Declaration
Objective-C
+ (nonnull CLAHE *)createCLAHE:(double)clipLimit;
Swift
class func createCLAHE(clipLimit: Double) -> CLAHE
Parameters
clipLimit
Threshold for contrast limiting.

Creates a smart pointer to a cv::GeneralizedHoughBallard class and initializes it.
Declaration
Objective-C
+ (nonnull GeneralizedHoughBallard *)createGeneralizedHoughBallard;
Swift
class func createGeneralizedHoughBallard() -> GeneralizedHoughBallard

Creates a smart pointer to a cv::GeneralizedHoughGuil class and initializes it.
Declaration
Objective-C
+ (nonnull GeneralizedHoughGuil *)createGeneralizedHoughGuil;
Swift
class func createGeneralizedHoughGuil() -> GeneralizedHoughGuil

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector: (LineSegmentDetectorModes)_refine _scale:(double)_scale _sigma_scale:(double)_sigma_scale _quant:(double)_quant _ang_th:(double)_ang_th _log_eps:(double)_log_eps _density_th:(double)_density_th _n_bins:(int)_n_bins;
Swift
class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double, _ang_th: Double, _log_eps: Double, _density_th: Double, _n_bins: Int32) -> LineSegmentDetector
Parameters
_refine
The way found lines will be refined, see #LineSegmentDetectorModes
_scale
The scale of the image that will be used to find the lines. Range (0..1].
_sigma_scale
Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.
_quant
Bound to the quantization error on the gradient norm.
_ang_th
Gradient angle tolerance in degrees.
_log_eps
Detection threshold: -log10(NFA) > log_eps. Used only when advanced refinement is chosen.
_density_th
Minimal density of aligned region points in the enclosing rectangle.
_n_bins
Number of bins in the pseudo-ordering of gradient modulus.

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector: (LineSegmentDetectorModes)_refine _scale:(double)_scale _sigma_scale:(double)_sigma_scale _quant:(double)_quant _ang_th:(double)_ang_th _log_eps:(double)_log_eps _density_th:(double)_density_th;
Swift
class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double, _ang_th: Double, _log_eps: Double, _density_th: Double) -> LineSegmentDetector
Parameters
_refine
The way found lines will be refined, see #LineSegmentDetectorModes
_scale
The scale of the image that will be used to find the lines. Range (0..1].
_sigma_scale
Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.
_quant
Bound to the quantization error on the gradient norm.
_ang_th
Gradient angle tolerance in degrees.
_log_eps
Detection threshold: -log10(NFA) > log_eps. Used only when advanced refinement is chosen.
_density_th
Minimal density of aligned region points in the enclosing rectangle.

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector: (LineSegmentDetectorModes)_refine _scale:(double)_scale _sigma_scale:(double)_sigma_scale _quant:(double)_quant _ang_th:(double)_ang_th _log_eps:(double)_log_eps;
Swift
class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double, _ang_th: Double, _log_eps: Double) -> LineSegmentDetector
Parameters
_refine
The way found lines will be refined, see #LineSegmentDetectorModes
_scale
The scale of the image that will be used to find the lines. Range (0..1].
_sigma_scale
Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.
_quant
Bound to the quantization error on the gradient norm.
_ang_th
Gradient angle tolerance in degrees.
_log_eps
Detection threshold: -log10(NFA) > log_eps. Used only when advanced refinement is chosen.

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector: (LineSegmentDetectorModes)_refine _scale:(double)_scale _sigma_scale:(double)_sigma_scale _quant:(double)_quant _ang_th:(double)_ang_th;
Swift
class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double, _ang_th: Double) -> LineSegmentDetector
Parameters
_refine
The way found lines will be refined, see #LineSegmentDetectorModes
_scale
The scale of the image that will be used to find the lines. Range (0..1].
_sigma_scale
Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.
_quant
Bound to the quantization error on the gradient norm.
_ang_th
Gradient angle tolerance in degrees.

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector: (LineSegmentDetectorModes)_refine _scale:(double)_scale _sigma_scale:(double)_sigma_scale _quant:(double)_quant;
Swift
class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double) -> LineSegmentDetector
Parameters
_refine
The way found lines will be refined, see #LineSegmentDetectorModes
_scale
The scale of the image that will be used to find the lines. Range (0..1].
_sigma_scale
Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.
_quant
Bound to the quantization error on the gradient norm.

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector: (LineSegmentDetectorModes)_refine _scale:(double)_scale _sigma_scale:(double)_sigma_scale;
Swift
class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double) -> LineSegmentDetector
Parameters
_refine
The way found lines will be refined, see #LineSegmentDetectorModes
_scale
The scale of the image that will be used to find the lines. Range (0..1].
_sigma_scale
Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector: (LineSegmentDetectorModes)_refine _scale:(double)_scale;
Swift
class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double) -> LineSegmentDetector
Parameters
_refine
The way found lines will be refined, see #LineSegmentDetectorModes
_scale
The scale of the image that will be used to find the lines. Range (0..1].

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector: (LineSegmentDetectorModes)_refine;
Swift
class func createLineSegmentDetector(_refine: LineSegmentDetectorModes) -> LineSegmentDetector
Parameters
_refine
The way found lines will be refined, see #LineSegmentDetectorModes

Creates a smart pointer to a LineSegmentDetector object and initializes it.
The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, so as to tailor it for their own application.
Note
Implementation has been removed due to an original code license conflict
Declaration
Objective-C
+ (nonnull LineSegmentDetector *)createLineSegmentDetector;
Swift
class func createLineSegmentDetector() -> LineSegmentDetector

Calculates the upright bounding rectangle of a point set or non-zero pixels of a grayscale image.
The function calculates and returns the minimal upright bounding rectangle for the specified point set or non-zero pixels of a grayscale image.
Declaration
Parameters
array
Input grayscale image or 2D point set, stored in std::vector or Mat.
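A pure-Swift sketch of the point-set case (illustrative; `boundingRect(of:)` is a hypothetical helper, not the binding):

```swift
// Minimal axis-aligned rectangle covering a non-empty 2D point set.
func boundingRect(of points: [(x: Double, y: Double)])
    -> (x: Double, y: Double, width: Double, height: Double) {
    let xs = points.map { $0.x }
    let ys = points.map { $0.y }
    // Assumes a non-empty point set.
    let minX = xs.min()!, minY = ys.min()!
    return (minX, minY, xs.max()! - minX, ys.max()! - minY)
}
```

Note that OpenCV's integer-valued rectangle is sized so the extreme pixels lie inside it, so its width and height come out one larger than the pure coordinate differences computed here.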

Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits (in a least-squares sense) a set of 2D points best of all. It returns the rotated rectangle in which the ellipse is inscribed. The first algorithm described by CITE: Fitzgibbon95 is used. Developers should keep in mind that it is possible that the returned ellipse/rotatedRect data contains negative indices, due to the data points being close to the border of the containing Mat element.
Declaration
Objective-C
+ (nonnull RotatedRect *)fitEllipse:(nonnull NSArray<Point2f *> *)points;
Swift
class func fitEllipse(points: [Point2f]) -> RotatedRect
Parameters
points
Input 2D point set, stored in std::vector<> or Mat

Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Approximate Mean Square (AMS) method proposed by CITE: Taubin1991 is used.
For an ellipse, this basis set is \chi = \left(x^2, x y, y^2, x, y, 1\right), which is a set of six free coefficients A^T = \left\{A_{\text{xx}}, A_{\text{xy}}, A_{\text{yy}}, A_x, A_y, A_0\right\}. However, to specify an ellipse, all that is needed is five numbers; the major and minor axes lengths (a,b), the position (x_0,y_0), and the orientation \theta. This is because the basis set includes lines, quadratics, parabolic and hyperbolic functions as well as elliptical functions as possible fits. If the fit is found to be a parabolic or hyperbolic function then the standard #fitEllipse method is used. The AMS method restricts the fit to parabolic, hyperbolic and elliptical curves by imposing the condition that A^T (D_x^T D_x + D_y^T D_y) A = 1, where the matrices D_x and D_y are the partial derivatives of the design matrix D with respect to x and y. The matrices are formed row by row applying the following to each of the points in the set:
\begin{aligned} D(i,:) &= \left\{x_i^2, x_i y_i, y_i^2, x_i, y_i, 1\right\} \\ D_x(i,:) &= \left\{2 x_i, y_i, 0, 1, 0, 0\right\} \\ D_y(i,:) &= \left\{0, x_i, 2 y_i, 0, 1, 0\right\} \end{aligned}
The AMS method minimizes the cost function
\epsilon^2 = \frac{A^T D^T D A}{A^T (D_x^T D_x + D_y^T D_y) A}
The minimum cost is found by solving the generalized eigenvalue problem
D^T D A = \lambda \left(D_x^T D_x + D_y^T D_y\right) A
Declaration
Objective-C
+ (nonnull RotatedRect *)fitEllipseAMS:(nonnull Mat *)points;
Swift
class func fitEllipseAMS(points: Mat) -> RotatedRect
Parameters
points
Input 2D point set, stored in std::vector<> or Mat

Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Direct least square (Direct) method by CITE: Fitzgibbon1999 is used.
For an ellipse, this basis set is \chi = \left(x^2, x y, y^2, x, y, 1\right), which is a set of six free coefficients A^T = \left\{A_{\text{xx}}, A_{\text{xy}}, A_{\text{yy}}, A_x, A_y, A_0\right\}. However, to specify an ellipse, all that is needed is five numbers; the major and minor axes lengths (a,b), the position (x_0,y_0), and the orientation \theta. This is because the basis set includes lines, quadratics, parabolic and hyperbolic functions as well as elliptical functions as possible fits. The Direct method confines the fit to ellipses by ensuring that 4 A_{xx} A_{yy} - A_{xy}^2 > 0. The condition imposed is that 4 A_{xx} A_{yy} - A_{xy}^2 = 1, which satisfies the inequality and, as the coefficients can be arbitrarily scaled, is not overly restrictive.
\begin{aligned} \epsilon^2 = A^T D^T D A \quad \text{with} \quad A^T C A = 1 \quad \text{and} \quad C = \left(\begin{matrix} 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{matrix}\right) \end{aligned}
The minimum cost is found by solving the generalized eigenvalue problem
D^T D A = \lambda \left( C \right) A
The system produces only one positive eigenvalue \lambda, which is chosen as the solution with its eigenvector \mathbf{u}. These are used to find the coefficients
\begin{aligned} A = \sqrt{\frac{1}{\mathbf{u}^T C \mathbf{u}}} \mathbf{u} \end{aligned}
The scaling factor guarantees that A^T C A = 1.
Declaration
Objective-C
+ (nonnull RotatedRect *)fitEllipseDirect:(nonnull Mat *)points;
Swift
class func fitEllipseDirect(points: Mat) -> RotatedRect
Parameters
points
Input 2D point set, stored in std::vector<> or Mat

Finds a rotated rectangle of the minimum area enclosing the input 2D point set.
The function calculates and returns the minimum-area bounding rectangle (possibly rotated) for a specified point set. Developers should keep in mind that the returned RotatedRect can contain negative indices when data is close to the containing Mat element boundary.
Declaration
Objective-C
+ (nonnull RotatedRect *)minAreaRect:(nonnull NSArray<Point2f *> *)points;
Swift
class func minAreaRect(points: [Point2f]) -> RotatedRect
Parameters
points
Input vector of 2D points, stored in std::vector<> or Mat

Calculates the width and height of a text string.
The function cv::getTextSize calculates and returns the size of a box that contains the specified text. That is, the following code renders some text, the tight box surrounding it, and the baseline:
String text = "Funny text inside the box";
int fontFace = FONT_HERSHEY_SCRIPT_SIMPLEX;
double fontScale = 2;
int thickness = 3;
Mat img(600, 800, CV_8UC3, Scalar::all(0));
int baseline = 0;
Size textSize = getTextSize(text, fontFace, fontScale, thickness, &baseline);
baseline += thickness;
// center the text
Point textOrg((img.cols - textSize.width)/2, (img.rows + textSize.height)/2);
// draw the box
rectangle(img, textOrg + Point(0, baseline), textOrg + Point(textSize.width, -textSize.height), Scalar(0,0,255));
// ... and the baseline first
line(img, textOrg + Point(0, thickness), textOrg + Point(textSize.width, thickness), Scalar(0, 0, 255));
// then put the text itself
putText(img, text, textOrg, fontFace, fontScale, Scalar::all(255), thickness, 8);
Declaration
ObjectiveC
+ (nonnull Size2i *)getTextSize:(nonnull NSString *)text fontFace:(HersheyFonts)fontFace fontScale:(double)fontScale thickness:(int)thickness baseLine:(nonnull int *)baseLine;
Swift
class func getTextSize(text: String, fontFace: HersheyFonts, fontScale: Double, thickness: Int32, baseLine: UnsafeMutablePointer<Int32>) -> Size2i
Parameters
text
Input text string.
fontFace
Font to use, see #HersheyFonts.
fontScale
Font scale factor that is multiplied by the font-specific base size.
thickness
Thickness of lines used to render the text. See #putText for details.
baseLine
y-coordinate of the baseline relative to the bottom-most text point.
Return Value
The size of a box that contains the specified text.

Tests a contour convexity.
The function tests whether the input contour is convex or not. The contour must be simple, that is, without self-intersections. Otherwise, the function output is undefined.
Declaration
ObjectiveC
+ (BOOL)isContourConvex:(nonnull NSArray<Point2i *> *)contour;
Swift
class func isContourConvex(contour: [Point2i]) -> Bool
Parameters
contour
Input vector of 2D points, stored in std::vector<> or Mat
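The convexity check can be illustrated in a few lines: a simple polygon is convex exactly when the cross products of consecutive edge vectors never change sign. A pure-Python sketch of that idea (not OpenCV's implementation):

```python
def is_contour_convex(pts):
    """Return True if the simple polygon given by pts is convex.

    Checks that the z-components of cross products of consecutive edge
    vectors never change sign (zeros from collinear points are allowed).
    """
    n = len(pts)
    sign = 0
    for i in range(n):
        ox, oy = pts[i]
        ax, ay = pts[(i + 1) % n]
        bx, by = pts[(i + 2) % n]
        # z-component of (a - o) x (b - a)
        cross = (ax - ox) * (by - ay) - (ay - oy) * (bx - ax)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # turn direction flipped: concavity
    return True

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
dented = [(0, 0), (10, 0), (5, 4), (10, 10), (0, 10)]  # has a concavity
print(is_contour_convex(square))  # True
print(is_contour_convex(dented))  # False
```

Note that, as with the library function, the result is meaningless if the polygon self-intersects.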

Calculates a contour perimeter or a curve length.
The function computes a curve length or a closed contour perimeter.
Declaration
ObjectiveC
+ (double)arcLength:(nonnull NSArray<Point2f *> *)curve closed:(BOOL)closed;
Swift
class func arcLength(curve: [Point2f], closed: Bool) -> Double
Parameters
curve
Input vector of 2D points, stored in std::vector or Mat.
closed
Flag indicating whether the curve is closed or not.
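The computation is simply the sum of Euclidean distances between consecutive points, with one extra segment from the last point back to the first when the curve is closed. A pure-Python sketch (not the library's code):

```python
import math

def arc_length(curve, closed):
    """Sum of segment lengths; a closed curve also includes
    the segment from the last point back to the first."""
    total = 0.0
    n = len(curve)
    last = n if closed else n - 1
    for i in range(last):
        x0, y0 = curve[i]
        x1, y1 = curve[(i + 1) % n]
        total += math.hypot(x1 - x0, y1 - y0)
    return total

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(arc_length(square, closed=True))   # 40.0 (perimeter)
print(arc_length(square, closed=False))  # 30.0 (open polyline)
```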

Compares two histograms.
The function cv::compareHist compares two dense or two sparse histograms using the specified method.
The function returns
d(H_1, H_2).
While the function works well with 1-, 2-, 3-dimensional dense histograms, it may not be suitable for high-dimensional sparse histograms. In such histograms, because of aliasing and sampling problems, the coordinates of non-zero histogram bins can slightly shift. To compare such histograms or more general sparse configurations of weighted points, consider using the #EMD function.
Declaration
ObjectiveC
+ (double)compareHist:(nonnull Mat *)H1 H2:(nonnull Mat *)H2 method:(HistCompMethods)method;
Swift
class func compareHist(H1: Mat, H2: Mat, method: HistCompMethods) -> Double
Parameters
H1
First compared histogram.
H2
Second compared histogram of the same size as H1 .
method
Comparison method, see #HistCompMethods
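As an illustration of one of the methods, the correlation mode (#HISTCMP_CORREL) computes the Pearson correlation of the two bin-value sequences. A pure-Python sketch of just that formula (the real function supports several methods and sparse histograms):

```python
import math

def compare_hist_correl(h1, h2):
    """Correlation comparison: Pearson correlation of the
    two histograms viewed as sequences of bin values."""
    n = len(h1)
    m1 = sum(h1) / n
    m2 = sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = math.sqrt(sum((a - m1) ** 2 for a in h1) *
                    sum((b - m2) ** 2 for b in h2))
    if den == 0:
        return 1.0  # degenerate case: constant histograms
    return num / den

h = [1.0, 4.0, 9.0, 4.0, 1.0]
print(compare_hist_correl(h, h))                       # 1.0 for identical histograms
print(compare_hist_correl(h, [9.0, 4.0, 1.0, 4.0, 9.0]) < 0)  # True: anti-correlated
```

Higher values mean a better match for this method; other methods (e.g. chi-square) invert that convention.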

Calculates a contour area.
The function computes a contour area. Similarly to moments, the area is computed using the Green formula. Thus, the returned area and the number of non-zero pixels, if you draw the contour using #drawContours or #fillPoly, can be different. Also, the function will most certainly give wrong results for contours with self-intersections.
Example:
vector<Point> contour;
contour.push_back(Point2f(0, 0));
contour.push_back(Point2f(10, 0));
contour.push_back(Point2f(10, 10));
contour.push_back(Point2f(5, 4));

double area0 = contourArea(contour);
vector<Point> approx;
approxPolyDP(contour, approx, 5, true);
double area1 = contourArea(approx);

cout << "area0 =" << area0 << endl <<
        "area1 =" << area1 << endl <<
        "approx poly vertices" << approx.size() << endl;
Declaration
ObjectiveC
+ (double)contourArea:(nonnull Mat *)contour oriented:(BOOL)oriented;
Swift
class func contourArea(contour: Mat, oriented: Bool) -> Double
Parameters
contour
Input vector of 2D points (contour vertices), stored in std::vector or Mat.
oriented
Oriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counterclockwise). Using this feature you can determine orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned.
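The Green formula behind contourArea is the classic shoelace sum, which is easy to state directly. A pure-Python sketch (standard math axes, so a counter-clockwise contour yields a positive signed area):

```python
def contour_area(pts, oriented=False):
    """Signed polygon area via the shoelace (Green's) formula;
    with oriented=False the absolute value is returned."""
    s = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    area = s / 2.0
    return area if oriented else abs(area)

ccw_square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(contour_area(ccw_square, oriented=True))                  # 100.0 (counter-clockwise)
print(contour_area(list(reversed(ccw_square)), oriented=True))  # -100.0 (clockwise)
print(contour_area([(0, 0), (10, 0), (10, 10), (5, 4)]))        # 45.0, the contour from the example above
```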

Calculates a contour area.
The function computes a contour area. Similarly to moments, the area is computed using the Green formula. Thus, the returned area and the number of non-zero pixels, if you draw the contour using #drawContours or #fillPoly, can be different. Also, the function will most certainly give wrong results for contours with self-intersections.
Example:
vector<Point> contour;
contour.push_back(Point2f(0, 0));
contour.push_back(Point2f(10, 0));
contour.push_back(Point2f(10, 10));
contour.push_back(Point2f(5, 4));

double area0 = contourArea(contour);
vector<Point> approx;
approxPolyDP(contour, approx, 5, true);
double area1 = contourArea(approx);

cout << "area0 =" << area0 << endl <<
        "area1 =" << area1 << endl <<
        "approx poly vertices" << approx.size() << endl;
Declaration
ObjectiveC
+ (double)contourArea:(nonnull Mat *)contour;
Swift
class func contourArea(contour: Mat) -> Double
Parameters
contour
Input vector of 2D points (contour vertices), stored in std::vector or Mat.

Calculates the font-specific size to use to achieve a given height in pixels.
See
cv::putText
Declaration
ObjectiveC
+ (double)getFontScaleFromHeight:(int)fontFace pixelHeight:(int)pixelHeight thickness:(int)thickness;
Swift
class func getFontScaleFromHeight(fontFace: Int32, pixelHeight: Int32, thickness: Int32) -> Double
Parameters
fontFace
Font to use, see cv::HersheyFonts.
pixelHeight
Pixel height to compute the fontScale for
thickness
Thickness of lines used to render the text. See putText for details.
Return Value
The fontSize to use for cv::putText

Calculates the font-specific size to use to achieve a given height in pixels.
See
cv::putText
Declaration
ObjectiveC
+ (double)getFontScaleFromHeight:(int)fontFace pixelHeight:(int)pixelHeight;
Swift
class func getFontScaleFromHeight(fontFace: Int32, pixelHeight: Int32) -> Double
Parameters
fontFace
Font to use, see cv::HersheyFonts.
pixelHeight
Pixel height to compute the fontScale for
Return Value
The fontSize to use for cv::putText

Compares two shapes.
The function compares two shapes. All three implemented methods use the Hu invariants (see #HuMoments)
Declaration
ObjectiveC
+ (double)matchShapes:(nonnull Mat *)contour1 contour2:(nonnull Mat *)contour2 method:(ShapeMatchModes)method parameter:(double)parameter;
Swift
class func matchShapes(contour1: Mat, contour2: Mat, method: ShapeMatchModes, parameter: Double) -> Double
Parameters
contour1
First contour or grayscale image.
contour2
Second contour or grayscale image.
method
Comparison method, see #ShapeMatchModes
parameter
Method-specific parameter (not supported now).

Finds a triangle of minimum area enclosing a 2D point set and returns its area.
The function finds a triangle of minimum area enclosing the given set of 2D points and returns its area. The output for a given 2D point set is shown in the image below. 2D points are depicted in red and the enclosing triangle in yellow.
The implementation of the algorithm is based on O'Rourke's CITE: ORourke86 and Klee and Laskowski's CITE: KleeLaskowski85 papers. O'Rourke provides a \theta(n) algorithm for finding the minimal enclosing triangle of a 2D convex polygon with n vertices. Since the #minEnclosingTriangle function takes a 2D point set as input, an additional preprocessing step of computing the convex hull of the 2D point set is required. The complexity of the #convexHull function is O(n log(n)), which is higher than \theta(n). Thus the overall complexity of the function is O(n log(n)).
Declaration
Parameters
points
Input vector of 2D points with depth CV_32S or CV_32F, stored in std::vector<> or Mat
triangle
Output vector of three 2D points defining the vertices of the triangle. The depth of the OutputArray must be CV_32F.

Performs a pointincontour test.
The function determines whether the point is inside a contour, outside, or lies on an edge (or coincides with a vertex). It returns a positive (inside), negative (outside), or zero (on an edge) value, correspondingly. When measureDist=false, the return value is +1, -1, and 0, respectively. Otherwise, the return value is a signed distance between the point and the nearest contour edge.
See below a sample output of the function where each image pixel is tested against the contour:
Declaration
Parameters
contour
Input contour.
pt
Point tested against the contour.
measureDist
If true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not.
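For the inside/outside part (measureDist=false), the classic even-odd ray-casting test captures the idea. A pure-Python sketch that skips the on-edge (0) and signed-distance cases the OpenCV function also handles:

```python
def point_in_polygon(contour, pt):
    """Even-odd ray-casting test: returns +1 inside, -1 outside.
    Counts how many polygon edges a horizontal ray from pt crosses;
    an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(contour)
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]
        if (y0 > y) != (y1 > y):
            # x-coordinate where this edge crosses the ray's horizontal line
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return 1 if inside else -1

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(square, (5, 5)))   # 1  (inside)
print(point_in_polygon(square, (15, 5)))  # -1 (outside)
```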

Applies a fixed-level threshold to each array element.
The function applies fixed-level thresholding to a multiple-channel array. The function is typically used to get a bi-level (binary) image out of a grayscale image (#compare could also be used for this purpose) or for removing noise, that is, filtering out pixels with too small or too large values. There are several types of thresholding supported by the function. They are determined by the type parameter.
Also, the special values #THRESH_OTSU or #THRESH_TRIANGLE may be combined with one of the above values. In these cases, the function determines the optimal threshold value using the Otsu’s or Triangle algorithm and uses it instead of the specified thresh.
Note
Currently, Otsu’s and Triangle methods are implemented only for 8-bit single-channel images.
Declaration
ObjectiveC
+ (double)threshold:(nonnull Mat *)src dst:(nonnull Mat *)dst thresh:(double)thresh maxval:(double)maxval type:(ThresholdTypes)type;
Swift
class func threshold(src: Mat, dst: Mat, thresh: Double, maxval: Double, type: ThresholdTypes) -> Double
Parameters
src
input array (multiple-channel, 8-bit or 32-bit floating point).
dst
output array of the same size and type and the same number of channels as src.
thresh
threshold value.
maxval
maximum value to use with the #THRESH_BINARY and #THRESH_BINARY_INV thresholding types.
type
thresholding type (see #ThresholdTypes).
Return Value
the computed threshold value if Otsu’s or Triangle methods are used.
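The simplest type, #THRESH_BINARY, sets dst(x,y) = maxval where src(x,y) > thresh and 0 elsewhere. A pure-Python sketch on a toy 2D array (the library applies this per channel with optimized code):

```python
def threshold_binary(src, thresh, maxval):
    """THRESH_BINARY: dst(x,y) = maxval if src(x,y) > thresh else 0."""
    return [[maxval if v > thresh else 0 for v in row] for row in src]

img = [[10, 200, 130],
       [127, 128, 255]]
print(threshold_binary(img, 127, 255))
# [[0, 255, 255], [0, 255, 255]]  (note 127 itself is NOT above the threshold)
```

The other types (#THRESH_BINARY_INV, #THRESH_TRUNC, #THRESH_TOZERO, ...) only change the per-pixel rule; the strict `>` comparison is shared.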

Finds intersection of two convex polygons
Note
intersectConvexConvex doesn’t confirm that both polygons are convex and will return invalid results if they aren’t.
Declaration
Parameters
_p1
First polygon
_p2
Second polygon
_p12
Output polygon describing the intersecting area
handleNested
When true, an intersection is found if one of the polygons is fully enclosed in the other. When false, no intersection is found. If the polygons share a side or the vertex of one polygon lies on an edge of the other, they are not considered nested and an intersection will be found regardless of the value of handleNested.
Return Value
Absolute value of area of intersecting polygon
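One standard way to intersect two convex polygons is Sutherland-Hodgman clipping. A pure-Python sketch (counter-clockwise vertex order assumed; as with the OpenCV function, non-convex input gives undefined results, and the handleNested option is not modeled):

```python
def intersect_convex_convex(p1, p2):
    """Clip convex polygon p1 by convex polygon p2 (both CCW).
    Returns (area, intersection polygon)."""
    def inside(p, a, b):
        # p lies on or to the left of the directed edge a->b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def cross_point(p, q, a, b):
        # intersection of segment p-q with the infinite line through a, b
        den = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
        t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / den
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    out = list(p1)
    for i in range(len(p2)):           # clip by each edge of p2 in turn
        a, b = p2[i], p2[(i + 1) % len(p2)]
        src, out = out, []
        if not src:
            break                      # polygons do not intersect
        s = src[-1]
        for e in src:
            if inside(e, a, b):
                if not inside(s, a, b):
                    out.append(cross_point(s, e, a, b))
                out.append(e)
            elif inside(s, a, b):
                out.append(cross_point(s, e, a, b))
            s = e
    # absolute area of the clipped polygon via the shoelace formula
    area = abs(sum(x0 * y1 - x1 * y0
                   for (x0, y0), (x1, y1) in zip(out, out[1:] + out[:1]))) / 2.0
    return area, out

sq1 = [(0, 0), (2, 0), (2, 2), (0, 2)]
sq2 = [(1, 1), (3, 1), (3, 3), (1, 3)]
area, poly = intersect_convex_convex(sq1, sq2)
print(area)  # 1.0: the unit square from (1,1) to (2,2)
```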

Finds intersection of two convex polygons
Note
intersectConvexConvex doesn’t confirm that both polygons are convex and will return invalid results if they aren’t.
Declaration
Parameters
_p1
First polygon
_p2
Second polygon
_p12
Output polygon describing the intersecting area
Return Value
Absolute value of area of intersecting polygon

Computes the “minimal work” distance between two weighted point configurations.
The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in CITE: RubnerSept98, CITE: Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, thus the complexity is exponential in the worst case, though, on average it is much faster. In the case of a real metric the lower boundary can be calculated even faster (using a linear-time algorithm) and it can be used to determine roughly whether the two signatures are far enough so that they cannot relate to the same object.
Declaration
ObjectiveC
+ (float)EMD:(nonnull Mat *)signature1 signature2:(nonnull Mat *)signature2 distType:(DistanceTypes)distType cost:(nonnull Mat *)cost flow:(nonnull Mat *)flow;
Swift
class func wrapperEMD(signature1: Mat, signature2: Mat, distType: DistanceTypes, cost: Mat, flow: Mat) -> Float
Parameters
signature1
First signature, a
a \texttt{size1} \times \texttt{dims}+1 floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.
signature2
Second signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra “dummy” point is added to either signature1 or signature2. The weights must be nonnegative and have at least one nonzero value.
distType
Used metric. See #DistanceTypes.
cost
Userdefined
\texttt{size1} \times \texttt{size2} cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.
lowerBound
Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). You must initialize *lowerBound. If the calculated distance between mass centers is greater or equal to *lowerBound (it means that the signatures are far enough), the function does not calculate EMD. In any case *lowerBound is set to the calculated distance between mass centers on return. Thus, if you want to calculate both distance between mass centers and EMD, *lowerBound should be set to 0.
flow
Resultant \texttt{size1} \times \texttt{size2} flow matrix: \texttt{flow}_{i,j} is a flow from the i-th point of signature1 to the j-th point of signature2.

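For intuition about "minimal work": for two 1-D distributions on the same equally-spaced bins, with equal total weight and L1 ground distance, EMD reduces to the sum of absolute differences of the cumulative sums. A pure-Python sketch of this special case (the general problem requires the transportation-simplex solver that cv::EMD implements):

```python
def emd_1d(w1, w2):
    """Earth mover's distance between two 1-D weight sequences on the
    same unit-spaced bins with equal total weight: the cost of moving
    one unit of weight one bin equals 1, and the optimum is the sum of
    absolute cumulative-sum differences."""
    assert abs(sum(w1) - sum(w2)) < 1e-9, "total weights must match"
    total, c1, c2 = 0.0, 0.0, 0.0
    for a, b in zip(w1, w2):
        c1 += a
        c2 += b
        total += abs(c1 - c2)
    return total

# moving one unit of mass two bins to the right costs 2
print(emd_1d([1, 0, 0], [0, 0, 1]))  # 2.0
print(emd_1d([0.5, 0.5], [0.5, 0.5]))  # 0.0: identical distributions
```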
Computes the “minimal work” distance between two weighted point configurations.
The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in CITE: RubnerSept98, CITE: Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, thus the complexity is exponential in the worst case, though, on average it is much faster. In the case of a real metric the lower boundary can be calculated even faster (using a linear-time algorithm) and it can be used to determine roughly whether the two signatures are far enough so that they cannot relate to the same object.
Declaration
ObjectiveC
+ (float)EMD:(nonnull Mat *)signature1 signature2:(nonnull Mat *)signature2 distType:(DistanceTypes)distType cost:(nonnull Mat *)cost;
Swift
class func wrapperEMD(signature1: Mat, signature2: Mat, distType: DistanceTypes, cost: Mat) -> Float
Parameters
signature1
First signature, a
a \texttt{size1} \times \texttt{dims}+1 floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.
signature2
Second signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra “dummy” point is added to either signature1 or signature2. The weights must be nonnegative and have at least one nonzero value.
distType
Used metric. See #DistanceTypes.
cost
Userdefined
\texttt{size1} \times \texttt{size2} cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.
lowerBound
Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). You must initialize *lowerBound. If the calculated distance between mass centers is greater or equal to *lowerBound (it means that the signatures are far enough), the function does not calculate EMD. In any case *lowerBound is set to the calculated distance between mass centers on return. Thus, if you want to calculate both distance between mass centers and EMD, *lowerBound should be set to 0.

Computes the “minimal work” distance between two weighted point configurations.
The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in CITE: RubnerSept98, CITE: Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, thus the complexity is exponential in the worst case, though, on average it is much faster. In the case of a real metric the lower boundary can be calculated even faster (using a linear-time algorithm) and it can be used to determine roughly whether the two signatures are far enough so that they cannot relate to the same object.
Declaration
ObjectiveC
+ (float)EMD:(nonnull Mat *)signature1 signature2:(nonnull Mat *)signature2 distType:(DistanceTypes)distType;
Swift
class func wrapperEMD(signature1: Mat, signature2: Mat, distType: DistanceTypes) -> Float
Parameters
signature1
First signature, a
a \texttt{size1} \times \texttt{dims}+1 floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.
signature2
Second signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra “dummy” point is added to either signature1 or signature2. The weights must be nonnegative and have at least one nonzero value.
distType
Used metric. See #DistanceTypes.

Computes the connected components labeled image of a boolean image.
image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently Grana's (BBDT) and Wu's (SAUF) algorithms are supported, see #ConnectedComponentsAlgorithmsTypes for details. Note that the SAUF algorithm forces a row-major ordering of labels while BBDT does not. This function uses a parallel version of both Grana's and Wu's algorithms if at least one allowed parallel framework is enabled and if the rows of the image are at least twice the number returned by #getNumberOfCPUs.
Declaration
Parameters
image
the 8-bit single-channel image to be labeled
labels
destination labeled image
connectivity
8 or 4 for 8-way or 4-way connectivity respectively
ltype
output image label type. Currently CV_32S and CV_16U are supported.
ccltype
connected components algorithm type (see the #ConnectedComponentsAlgorithmsTypes).
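The meaning of the labeling (not the fast SAUF/BBDT scan algorithms themselves) can be sketched with a breadth-first search in pure Python:

```python
from collections import deque

def connected_components(img, connectivity=8):
    """Label non-zero pixels of a small boolean image by BFS.
    Returns (N, labels): labels 1..N-1 mark components, 0 is background,
    so N is the total number of labels in [0, N-1]."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    if connectivity == 8:
        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                (0, 1), (1, -1), (1, 0), (1, 1)]
    else:
        nbrs = [(-1, 0), (0, -1), (0, 1), (1, 0)]
    next_label = 1
    for y in range(h):
        for x in range(w):
            if img[y][x] and labels[y][x] == 0:
                # flood this component with a fresh label
                q = deque([(y, x)])
                labels[y][x] = next_label
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                next_label += 1
    return next_label, labels

img = [[1, 1, 0, 0],
       [0, 0, 0, 1],
       [1, 0, 0, 1]]
n, labels = connected_components(img, connectivity=4)
print(n)  # 4: background label 0 plus three components 1..3
```

With connectivity=8 the two bottom-left and top-left groups would still be separate here, but diagonal neighbours in general merge components that 4-connectivity keeps apart.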

Declaration
Parameters
image
the 8-bit single-channel image to be labeled
labels
destination labeled image
connectivity
8 or 4 for 8-way or 4-way connectivity respectively
ltype
output image label type. Currently CV_32S and CV_16U are supported.

Computes the connected components labeled image of a boolean image and also produces a statistics output for each label.
image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently Grana's (BBDT) and Wu's (SAUF) algorithms are supported, see #ConnectedComponentsAlgorithmsTypes for details. Note that the SAUF algorithm forces a row-major ordering of labels while BBDT does not. This function uses a parallel version of both Grana's and Wu's algorithms (statistics included) if at least one allowed parallel framework is enabled and if the rows of the image are at least twice the number returned by #getNumberOfCPUs.
Declaration
ObjectiveC
+ (int)connectedComponentsWithStatsWithAlgorithm:(nonnull Mat *)image labels:(nonnull Mat *)labels stats:(nonnull Mat *)stats centroids:(nonnull Mat *)centroids connectivity:(int)connectivity ltype:(int)ltype ccltype:(ConnectedComponentsAlgorithmsTypes)ccltype;
Swift
class func connectedComponentsWithStats(image: Mat, labels: Mat, stats: Mat, centroids: Mat, connectivity: Int32, ltype: Int32, ccltype: ConnectedComponentsAlgorithmsTypes) -> Int32
Parameters
image
the 8-bit single-channel image to be labeled
labels
destination labeled image
stats
statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes, selecting the statistic. The data type is CV_32S.
centroids
centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.
connectivity
8 or 4 for 8-way or 4-way connectivity respectively
ltype
output image label type. Currently CV_32S and CV_16U are supported.
ccltype
connected components algorithm type (see #ConnectedComponentsAlgorithmsTypes).

Declaration
Parameters
image
the 8-bit single-channel image to be labeled
labels
destination labeled image
stats
statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes, selecting the statistic. The data type is CV_32S.
centroids
centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.
connectivity
8 or 4 for 8-way or 4-way connectivity respectively
ltype
output image label type. Currently CV_32S and CV_16U are supported.

Declaration
Parameters
image
the 8-bit single-channel image to be labeled
labels
destination labeled image
stats
statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes, selecting the statistic. The data type is CV_32S.
centroids
centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.
connectivity
8 or 4 for 8-way or 4-way connectivity respectively

Declaration
Parameters
image
the 8-bit single-channel image to be labeled
labels
destination labeled image
stats
statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes, selecting the statistic. The data type is CV_32S.
centroids
centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.

Fills a connected component with the given color.
The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at
(x,y) is considered to belong to the repainted domain if:
in case of a grayscale image and floating range:
\texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}
in case of a grayscale image and fixed range:
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}
in case of a color image and floating range:
\texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
\texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g and
\texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b
in case of a color image and fixed range:
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g and
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b
where src(x',y') is the value of one of pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, a color/brightness of the pixel should be close enough to:
- Color/brightness of one of its neighbors that already belong to the connected component in case of a floating range.
- Color/brightness of the seed point in case of a fixed range.
Use these functions to either mark a connected component with the specified color inplace, or build a mask and then extract the contour, or copy the region to another image, and so on.
Note
Since the mask is larger than the filled image, a pixel (x, y) in image corresponds to the pixel (x+1, y+1) in the mask.
Declaration
Parameters
image
Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
mask
Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to a value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.
seedPoint
Starting point.
newVal
New value of the repainted domain pixels.
loDiff
Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
upDiff
Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
rect
Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.
flags
Operation flags. The first 8 bits contain a connectivity value. The default value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered. A connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered. The next 8 bits (8-16) contain a value between 1 and 255 with which to fill the mask (the default value is 1). For example, 4 | ( 255 << 8 ) will consider 4 nearest neighbours and fill the mask with a value of 255. The following additional options occupy higher bits and therefore may be further combined with the connectivity and mask fill values using bitwise or ( | ), see #FloodFillFlags.
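The floating-range grayscale case can be sketched with a breadth-first fill in pure Python (masks, fixed range, and 8-connectivity omitted; this is an illustration of the acceptance rule, not the library's implementation):

```python
from collections import deque

def flood_fill(img, seed, new_val, lo_diff=0, up_diff=0):
    """Grayscale flood fill, floating range, 4-connectivity:
    a neighbor (x, y) joins the component when
    src(x', y') - lo_diff <= src(x, y) <= src(x', y') + up_diff,
    where (x', y') is the already-filled neighbor.
    Returns the number of repainted pixels."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    q = deque([(sy, sx)])
    filled = {(sy, sx)}
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in filled
                    and img[y][x] - lo_diff <= img[ny][nx] <= img[y][x] + up_diff):
                filled.add((ny, nx))
                q.append((ny, nx))
    for y, x in filled:          # repaint after the search
        img[y][x] = new_val
    return len(filled)

img = [[10, 11, 50],
       [12, 13, 50],
       [50, 50, 50]]
n = flood_fill(img, (0, 0), 99, lo_diff=2, up_diff=2)
print(n)    # 4: only the 10..13 region is repainted
print(img)  # [[99, 99, 50], [99, 99, 50], [50, 50, 50]]
```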

Fills a connected component with the given color.
The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at
(x,y) is considered to belong to the repainted domain if:
in case of a grayscale image and floating range:
\texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}
in case of a grayscale image and fixed range:
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}
in case of a color image and floating range:
\texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
\texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g and
\texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b
in case of a color image and fixed range:
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g and
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b
where src(x',y') is the value of one of pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, a color/brightness of the pixel should be close enough to:
- Color/brightness of one of its neighbors that already belong to the connected component in case of a floating range.
- Color/brightness of the seed point in case of a fixed range.
Use these functions to either mark a connected component with the specified color inplace, or build a mask and then extract the contour, or copy the region to another image, and so on.
Note
Since the mask is larger than the filled image, a pixel (x, y) in image corresponds to the pixel (x+1, y+1) in the mask.
Declaration
Parameters
image
Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
mask
Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to a value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.
seedPoint
Starting point.
newVal
New value of the repainted domain pixels.
loDiff
Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
upDiff
Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
rect
Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.
flags
Operation flags. The first 8 bits contain a connectivity value. A connectivity value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered; a connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered. The next 8 bits (8-16) contain a value between 1 and 255 with which to fill the mask (the default value is 1). For example, 4 | ( 255 << 8 ) will consider 4 nearest neighbours and fill the mask with a value of 255. The following additional options occupy higher bits and therefore may be further combined with the connectivity and mask fill values using bitwise or (|), see #FloodFillFlags.
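The flag bit layout described above can be verified in plain Swift, with no OpenCV call involved; the connectivity of 4 and the fill value of 255 are just example choices:

```swift
// floodFill flags layout: low 8 bits = connectivity (4 or 8),
// bits 8-15 = value used to fill the mask, higher bits = option flags.
let connectivity: Int32 = 4
let maskFillValue: Int32 = 255
let flags: Int32 = connectivity | (maskFillValue << 8)

// Decompose to check that the packing round-trips.
let decodedConnectivity = flags & 0xFF
let decodedFillValue = (flags >> 8) & 0xFF
```

Passing such a value as the flags argument selects 4-connectivity and writes 255 into the mask for every filled pixel.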

Fills a connected component with the given color.
The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at
(x,y) is considered to belong to the repainted domain if:
- in case of a grayscale image and floating range
\texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}
- in case of a grayscale image and fixed range
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}
- in case of a color image and floating range
\texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
\texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g and
\texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b
- in case of a color image and fixed range
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g and
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b
where
src(x',y') is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of the pixel should be close enough to:
- the color/brightness of one of its neighbors that already belong to the connected component, in case of a floating range;
- the color/brightness of the seed point, in case of a fixed range.
Use these functions to either mark a connected component with the specified color in place, or build a mask and then extract the contour, or copy the region to another image, and so on.
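The grayscale floating-range condition above can be expressed as a small Swift predicate; this is an illustrative sketch, not part of the OpenCV API:

```swift
// A pixel joins the component when its value lies within
// [neighbor - loDiff, neighbor + upDiff] for some neighbor already
// in the component (grayscale, floating range). For a fixed range,
// pass the seed-point value in place of the neighbor value.
func withinFloatingRange(pixel: Double, neighbor: Double,
                         loDiff: Double, upDiff: Double) -> Bool {
    (neighbor - loDiff) <= pixel && pixel <= (neighbor + upDiff)
}
```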
Note
Since the mask is larger than the filled image, a pixel (x, y) in image corresponds to the pixel (x+1, y+1) in the mask.
Declaration
Parameters
image
Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
mask
Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility of initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to a value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.
seedPoint
Starting point.
newVal
New value of the repainted domain pixels.
loDiff
Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
upDiff
Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
rect
Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.
flags
Operation flags. The first 8 bits contain a connectivity value. A connectivity value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered; a connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered. The next 8 bits (8-16) contain a value between 1 and 255 with which to fill the mask (the default value is 1). For example, 4 | ( 255 << 8 ) will consider 4 nearest neighbours and fill the mask with a value of 255. The following additional options occupy higher bits and therefore may be further combined with the connectivity and mask fill values using bitwise or (|), see #FloodFillFlags.

Fills a connected component with the given color.
The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at
(x,y) is considered to belong to the repainted domain if:
- in case of a grayscale image and floating range
\texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}
- in case of a grayscale image and fixed range
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}
- in case of a color image and floating range
\texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
\texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g and
\texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b
- in case of a color image and fixed range
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g and
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b
where
src(x',y') is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of the pixel should be close enough to:
- the color/brightness of one of its neighbors that already belong to the connected component, in case of a floating range;
- the color/brightness of the seed point, in case of a fixed range.
Use these functions to either mark a connected component with the specified color in place, or build a mask and then extract the contour, or copy the region to another image, and so on.
Note
Since the mask is larger than the filled image, a pixel (x, y) in image corresponds to the pixel (x+1, y+1) in the mask.
Declaration
Parameters
image
Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
mask
Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility of initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to a value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.
seedPoint
Starting point.
newVal
New value of the repainted domain pixels.
loDiff
Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
upDiff
Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
rect
Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.
flags
Operation flags. The first 8 bits contain a connectivity value. A connectivity value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered; a connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered. The next 8 bits (8-16) contain a value between 1 and 255 with which to fill the mask (the default value is 1). For example, 4 | ( 255 << 8 ) will consider 4 nearest neighbours and fill the mask with a value of 255. The following additional options occupy higher bits and therefore may be further combined with the connectivity and mask fill values using bitwise or (|), see #FloodFillFlags.

Fills a connected component with the given color.
The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at
(x,y) is considered to belong to the repainted domain if:
- in case of a grayscale image and floating range
\texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}
- in case of a grayscale image and fixed range
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}
- in case of a color image and floating range
\texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
\texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g and
\texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b
- in case of a color image and fixed range
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g and
\texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b
where
src(x',y') is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of the pixel should be close enough to:
- the color/brightness of one of its neighbors that already belong to the connected component, in case of a floating range;
- the color/brightness of the seed point, in case of a fixed range.
Use these functions to either mark a connected component with the specified color in place, or build a mask and then extract the contour, or copy the region to another image, and so on.
Note
Since the mask is larger than the filled image, a pixel (x, y) in image corresponds to the pixel (x+1, y+1) in the mask.
Declaration
Parameters
image
Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
mask
Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility of initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to a value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.
seedPoint
Starting point.
newVal
New value of the repainted domain pixels.
loDiff
Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
upDiff
Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
rect
Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.
flags
Operation flags. The first 8 bits contain a connectivity value. A connectivity value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered; a connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered. The next 8 bits (8-16) contain a value between 1 and 255 with which to fill the mask (the default value is 1). For example, 4 | ( 255 << 8 ) will consider 4 nearest neighbours and fill the mask with a value of 255. The following additional options occupy higher bits and therefore may be further combined with the connectivity and mask fill values using bitwise or (|), see #FloodFillFlags.

Finds out if there is any intersection between two rotated rectangles.
If there is then the vertices of the intersecting region are returned as well.
Below are some examples of intersection configurations. The hatched pattern indicates the intersecting region and the red vertices are returned by the function.
Declaration
ObjectiveC
+ (int)rotatedRectangleIntersection:(nonnull RotatedRect *)rect1 rect2:(nonnull RotatedRect *)rect2 intersectingRegion:(nonnull Mat *)intersectingRegion;
Swift
class func rotatedRectangleIntersection(rect1: RotatedRect, rect2: RotatedRect, intersectingRegion: Mat) -> Int32
Parameters
rect1
First rectangle
rect2
Second rectangle
intersectingRegion
The output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as std::vector<cv::Point2f> or cv::Mat as Mx1 of type CV_32FC2.
Return Value
One of #RectanglesIntersectTypes
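The return code can be interpreted with a small helper; the numeric values 0/1/2 mirror what OpenCV's RectanglesIntersectTypes enum (INTERSECT_NONE, INTERSECT_PARTIAL, INTERSECT_FULL) is commonly documented as, and are written out here as an assumption rather than imported from the framework:

```swift
// Map the raw return code of rotatedRectangleIntersection to a description.
func describeIntersection(_ code: Int32) -> String {
    switch code {
    case 0: return "no intersection"                      // INTERSECT_NONE
    case 1: return "partial intersection"                 // INTERSECT_PARTIAL
    case 2: return "one rectangle fully inside the other" // INTERSECT_FULL
    default: return "unexpected code"
    }
}
```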

\overload
Finds edges in an image using the Canny algorithm with custom image gradient.
Declaration
Parameters
dx
16-bit x derivative of input image (CV_16SC1 or CV_16SC3).
dy
16-bit y derivative of input image (same type as dx).
edges
output edge map; single-channel 8-bit image, which has the same size as image.
threshold1
first threshold for the hysteresis procedure.
threshold2
second threshold for the hysteresis procedure.
L2gradient
a flag, indicating whether a more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough ( L2gradient=false ).
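The two norms from the L2gradient description can be compared directly in Swift; this is just the formula restated, not an OpenCV call:

```swift
// L1 and L2 gradient magnitudes for a single pixel's derivatives.
func l1Magnitude(dx: Double, dy: Double) -> Double {
    abs(dx) + abs(dy)
}
func l2Magnitude(dx: Double, dy: Double) -> Double {
    (dx * dx + dy * dy).squareRoot()
}
```

For dx = 3 and dy = 4, the L1 norm is 7 while the more accurate L2 norm is 5; the L2 norm never exceeds the L1 norm.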
\overload
Finds edges in an image using the Canny algorithm with custom image gradient.
Declaration
Parameters
dx
16-bit x derivative of input image (CV_16SC1 or CV_16SC3).
dy
16-bit y derivative of input image (same type as dx).
edges
output edge map; single-channel 8-bit image, which has the same size as image.
threshold1
first threshold for the hysteresis procedure.
threshold2
second threshold for the hysteresis procedure.
L2gradient
a flag, indicating whether a more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough ( L2gradient=false ).
Finds edges in an image using the Canny algorithm CITE: Canny86 .
The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector
Declaration
Parameters
image
8-bit input image.
edges
output edge map; single-channel 8-bit image, which has the same size as image.
threshold1
first threshold for the hysteresis procedure.
threshold2
second threshold for the hysteresis procedure.
apertureSize
aperture size for the Sobel operator.
L2gradient
a flag, indicating whether a more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough ( L2gradient=false ).
Finds edges in an image using the Canny algorithm CITE: Canny86 .
The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector
Declaration
Parameters
image
8-bit input image.
edges
output edge map; single-channel 8-bit image, which has the same size as image.
threshold1
first threshold for the hysteresis procedure.
threshold2
second threshold for the hysteresis procedure.
apertureSize
aperture size for the Sobel operator.
L2gradient
a flag, indicating whether a more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough ( L2gradient=false ).
Finds edges in an image using the Canny algorithm CITE: Canny86 .
The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector
Declaration
Parameters
image
8-bit input image.
edges
output edge map; single-channel 8-bit image, which has the same size as image.
threshold1
first threshold for the hysteresis procedure.
threshold2
second threshold for the hysteresis procedure.
L2gradient
a flag, indicating whether a more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough ( L2gradient=false ).
Blurs an image using a Gaussian filter.
The function convolves the source image with the specified Gaussian kernel. In-place filtering is supported.
Declaration
ObjectiveC
+ (void)GaussianBlur:(nonnull Mat *)src dst:(nonnull Mat *)dst ksize:(nonnull Size2i *)ksize sigmaX:(double)sigmaX sigmaY:(double)sigmaY borderType:(BorderTypes)borderType;
Swift
class func GaussianBlur(src: Mat, dst: Mat, ksize: Size2i, sigmaX: Double, sigmaY: Double, borderType: BorderTypes)
Parameters
src
input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst
output image of the same size and type as src.
ksize
Gaussian kernel size. ksize.width and ksize.height can differ, but they both must be positive and odd. Or, they can be zeros, and then they are computed from sigma.
sigmaX
Gaussian kernel standard deviation in X direction.
sigmaY
Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see #getGaussianKernel for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.
borderType
pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
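When both sigmas are zero, OpenCV derives sigma from the kernel size inside #getGaussianKernel. The formula commonly documented for that function is sketched below in plain Swift; verify it against the getGaussianKernel reference for your OpenCV version:

```swift
// Default sigma computed from an odd kernel size when sigma <= 0,
// per the formula documented for getGaussianKernel:
//   sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
func defaultGaussianSigma(ksize: Int) -> Double {
    0.3 * ((Double(ksize) - 1.0) * 0.5 - 1.0) + 0.8
}
```

For example, a 7x7 kernel corresponds to a default sigma of 1.4, close to the "7x7 kernel and 1.5 sigma" pairing suggested in the HoughCircles note below.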

Blurs an image using a Gaussian filter.
The function convolves the source image with the specified Gaussian kernel. In-place filtering is supported.
Declaration
Parameters
src
input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst
output image of the same size and type as src.
ksize
Gaussian kernel size. ksize.width and ksize.height can differ, but they both must be positive and odd. Or, they can be zeros, and then they are computed from sigma.
sigmaX
Gaussian kernel standard deviation in X direction.
sigmaY
Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see #getGaussianKernel for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.

Blurs an image using a Gaussian filter.
The function convolves the source image with the specified Gaussian kernel. In-place filtering is supported.
Declaration
Parameters
src
input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst
output image of the same size and type as src.
ksize
Gaussian kernel size. ksize.width and ksize.height can differ, but they both must be positive and odd. Or, they can be zeros, and then they are computed from sigma.
sigmaX
Gaussian kernel standard deviation in X direction.
sigmaY
Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see #getGaussianKernel for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.

Finds circles in a grayscale image using the Hough transform.
The function finds circles in a grayscale image using a modification of the Hough transform.
Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp
Note
Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the #HOUGH_GRADIENT method, you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure. It also helps to smooth the image a bit unless it is already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.
Declaration
ObjectiveC
+ (void)HoughCircles:(nonnull Mat *)image circles:(nonnull Mat *)circles method:(HoughModes)method dp:(double)dp minDist:(double)minDist param1:(double)param1 param2:(double)param2 minRadius:(int)minRadius maxRadius:(int)maxRadius;
Swift
class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double, param1: Double, param2: Double, minRadius: Int32, maxRadius: Int32)
Parameters
image
8-bit, single-channel, grayscale input image.
circles
Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector (x, y, radius) or (x, y, radius, votes).
method
Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.
dp
Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.
minDist
Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1
First method-specific parameter. In case of #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that #HOUGH_GRADIENT_ALT uses the Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.
param2
Second method-specific parameter. In case of #HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first. In the case of the #HOUGH_GRADIENT_ALT algorithm, this is the circle “perfectness” measure. The closer it is to 1, the better-shaped circles the algorithm selects. In most cases 0.9 should be fine. If you want to get better detection of small circles, you may decrease it to 0.85, 0.8 or even less. But then also try to limit the search range [minRadius, maxRadius] to avoid many false circles.
minRadius
Minimum circle radius.
maxRadius
Maximum circle radius. If <= 0, uses the maximum image dimension. If < 0, #HOUGH_GRADIENT returns centers without finding the radius. #HOUGH_GRADIENT_ALT always computes circle radii.
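Each detected circle comes back as an (x, y, radius) record. The sketch below assumes the output Mat has already been copied into a flat [Float] buffer; that copy step and the DetectedCircle type are hypothetical post-processing, not part of the OpenCV API:

```swift
// Parse a flat buffer of (x, y, radius) triples into circle records.
struct DetectedCircle {
    let x: Float
    let y: Float
    let radius: Float
}

func parseCircles(_ buffer: [Float]) -> [DetectedCircle] {
    stride(from: 0, to: buffer.count - 2, by: 3).map { i in
        DetectedCircle(x: buffer[i], y: buffer[i + 1], radius: buffer[i + 2])
    }
}
```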

Finds circles in a grayscale image using the Hough transform.
The function finds circles in a grayscale image using a modification of the Hough transform.
Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp
Note
Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the #HOUGH_GRADIENT method, you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure. It also helps to smooth the image a bit unless it is already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.
Declaration
ObjectiveC
+ (void)HoughCircles:(nonnull Mat *)image circles:(nonnull Mat *)circles method:(HoughModes)method dp:(double)dp minDist:(double)minDist param1:(double)param1 param2:(double)param2 minRadius:(int)minRadius;
Swift
class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double, param1: Double, param2: Double, minRadius: Int32)
Parameters
image
8-bit, single-channel, grayscale input image.
circles
Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector (x, y, radius) or (x, y, radius, votes).
method
Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.
dp
Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.
minDist
Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1
First method-specific parameter. In case of #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that #HOUGH_GRADIENT_ALT uses the Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.
param2
Second method-specific parameter. In case of #HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first. In the case of the #HOUGH_GRADIENT_ALT algorithm, this is the circle “perfectness” measure. The closer it is to 1, the better-shaped circles the algorithm selects. In most cases 0.9 should be fine. If you want to get better detection of small circles, you may decrease it to 0.85, 0.8 or even less. But then also try to limit the search range [minRadius, maxRadius] to avoid many false circles.
minRadius
Minimum circle radius.
maxRadius
Maximum circle radius. If <= 0, uses the maximum image dimension. If < 0, #HOUGH_GRADIENT returns centers without finding the radius. #HOUGH_GRADIENT_ALT always computes circle radii.

Finds circles in a grayscale image using the Hough transform.
The function finds circles in a grayscale image using a modification of the Hough transform.
Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp
Note
Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the #HOUGH_GRADIENT method, you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure. It also helps to smooth the image a bit unless it is already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.
Declaration
ObjectiveC
+ (void)HoughCircles:(nonnull Mat *)image circles:(nonnull Mat *)circles method:(HoughModes)method dp:(double)dp minDist:(double)minDist param1:(double)param1 param2:(double)param2;
Swift
class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double, param1: Double, param2: Double)
Parameters
image
8-bit, single-channel, grayscale input image.
circles
Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector (x, y, radius) or (x, y, radius, votes).
method
Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.
dp
Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.
minDist
Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1
First method-specific parameter. In case of #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that #HOUGH_GRADIENT_ALT uses the Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.
param2
Second method-specific parameter. In case of #HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first. In the case of the #HOUGH_GRADIENT_ALT algorithm, this is the circle “perfectness” measure. The closer it is to 1, the better-shaped circles the algorithm selects. In most cases 0.9 should be fine. If you want to get better detection of small circles, you may decrease it to 0.85, 0.8 or even less. But then also try to limit the search range [minRadius, maxRadius] to avoid many false circles.
maxRadius
Maximum circle radius. If <= 0, uses the maximum image dimension. If < 0, #HOUGH_GRADIENT returns centers without finding the radius. #HOUGH_GRADIENT_ALT always computes circle radii.

Finds circles in a grayscale image using the Hough transform.
The function finds circles in a grayscale image using a modification of the Hough transform.
Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp
Note
Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range (minRadius and maxRadius) if you know it. Or, in the case of the #HOUGH_GRADIENT method, you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure. It also helps to smooth the image a bit, unless it is already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.
Declaration
ObjectiveC
+ (void)HoughCircles:(nonnull Mat *)image circles:(nonnull Mat *)circles method:(HoughModes)method dp:(double)dp minDist:(double)minDist param1:(double)param1;
Swift
class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double, param1: Double)
Parameters
image
8-bit, single-channel, grayscale input image.
circles
Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector
(x, y, radius) or (x, y, radius, votes).
method
Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.
dp
Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.
minDist
Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1
First method-specific parameter. In case of #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that #HOUGH_GRADIENT_ALT uses the #Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.
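As a usage sketch for the overload above (assuming the opencv2 Swift module; the enum-case spelling .HOUGH_GRADIENT and the Mat loading are illustrative, not taken from this page):

```swift
import opencv2

// Assume `gray` holds an 8-bit, single-channel grayscale image,
// loaded elsewhere and ideally lightly blurred first.
let gray = Mat()      // placeholder; load a real image in practice
let circles = Mat()

// dp=1: accumulator at full image resolution.
// minDist: reject centers closer than 1/8 of the image height.
// param1=100: higher Canny threshold (the lower one is half of it).
Imgproc.HoughCircles(image: gray,
                     circles: circles,
                     method: .HOUGH_GRADIENT,   // case name assumed from the C++ constant
                     dp: 1,
                     minDist: Double(gray.rows()) / 8,
                     param1: 100)

// Each row of `circles` encodes one detected circle as (x, y, radius).
```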

Finds circles in a grayscale image using the Hough transform.
The function finds circles in a grayscale image using a modification of the Hough transform.
Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp
Note
Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range (minRadius and maxRadius) if you know it. Or, in the case of the #HOUGH_GRADIENT method, you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure. It also helps to smooth the image a bit, unless it is already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.
Declaration
ObjectiveC
+ (void)HoughCircles:(nonnull Mat *)image circles:(nonnull Mat *)circles method:(HoughModes)method dp:(double)dp minDist:(double)minDist;
Swift
class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double)
Parameters
image
8-bit, single-channel, grayscale input image.
circles
Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector
(x, y, radius) or (x, y, radius, votes).
method
Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.
dp
Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.
minDist
Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

Finds lines in a binary image using the standard Hough transform.
The function implements the standard or standard multiscale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.
Declaration
Parameters
image
8-bit, single-channel binary source image. The image may be modified by the function.
lines
Output vector of lines. Each line is represented by a 2 or 3 element vector
(\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.
rho
Distance resolution of the accumulator in pixels.
theta
Angle resolution of the accumulator in radians.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).
srn
For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.
stn
For the multi-scale Hough transform, it is a divisor for the distance resolution theta.
min_theta
For standard and multi-scale Hough transform, minimum angle to check for lines. Must fall between 0 and max_theta.
max_theta
For standard and multi-scale Hough transform, maximum angle to check for lines. Must fall between min_theta and CV_PI.

Finds lines in a binary image using the standard Hough transform.
The function implements the standard or standard multiscale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.
Declaration
Parameters
image
8-bit, single-channel binary source image. The image may be modified by the function.
lines
Output vector of lines. Each line is represented by a 2 or 3 element vector
(\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.
rho
Distance resolution of the accumulator in pixels.
theta
Angle resolution of the accumulator in radians.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).
srn
For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.
stn
For the multi-scale Hough transform, it is a divisor for the distance resolution theta.
min_theta
For standard and multi-scale Hough transform, minimum angle to check for lines. Must fall between 0 and max_theta.

Finds lines in a binary image using the standard Hough transform.
The function implements the standard or standard multiscale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.
Declaration
Parameters
image
8-bit, single-channel binary source image. The image may be modified by the function.
lines
Output vector of lines. Each line is represented by a 2 or 3 element vector
(\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.
rho
Distance resolution of the accumulator in pixels.
theta
Angle resolution of the accumulator in radians.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).
srn
For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.
stn
For the multi-scale Hough transform, it is a divisor for the distance resolution theta.

Finds lines in a binary image using the standard Hough transform.
The function implements the standard or standard multiscale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.
Declaration
Parameters
image
8-bit, single-channel binary source image. The image may be modified by the function.
lines
Output vector of lines. Each line is represented by a 2 or 3 element vector
(\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.
rho
Distance resolution of the accumulator in pixels.
theta
Angle resolution of the accumulator in radians.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).
srn
For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.

Finds lines in a binary image using the standard Hough transform.
The function implements the standard or standard multiscale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.
Declaration
Parameters
image
8-bit, single-channel binary source image. The image may be modified by the function.
lines
Output vector of lines. Each line is represented by a 2 or 3 element vector
(\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.
rho
Distance resolution of the accumulator in pixels.
theta
Angle resolution of the accumulator in radians.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).

Finds line segments in a binary image using the probabilistic Hough transform.
The function implements the probabilistic Hough transform algorithm for line detection, described in CITE: Matas00
See the line detection example below: INCLUDE: snippets/imgproc_HoughLinesP.cpp This is a sample picture the function parameters have been tuned for:
And this is the output of the above program in case of the probabilistic Hough transform:
Declaration
Parameters
image
8-bit, single-channel binary source image. The image may be modified by the function.
lines
Output vector of lines. Each line is represented by a 4element vector
(x_1, y_1, x_2, y_2), where (x_1,y_1) and (x_2, y_2) are the ending points of each detected line segment.
rho
Distance resolution of the accumulator in pixels.
theta
Angle resolution of the accumulator in radians.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).
minLineLength
Minimum line length. Line segments shorter than that are rejected.
maxLineGap
Maximum allowed gap between points on the same line to link them.
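The parameters above can be sketched in a short Swift call. The Declaration for this overload is not reproduced on this page, so the parameter labels below are assumed to follow the C++ API and the naming pattern of the other methods here; treat them as illustrative:

```swift
import opencv2

// `edges` would typically be the binary output of an edge detector
// such as Canny; placeholder Mat used here for illustration.
let edges = Mat()
let linesP = Mat()

// rho = 1 px, theta = 1 degree, at least 50 accumulator votes;
// segments shorter than 30 px are rejected, gaps up to 10 px are bridged.
Imgproc.HoughLinesP(image: edges, lines: linesP,
                    rho: 1, theta: .pi / 180, threshold: 50,
                    minLineLength: 30, maxLineGap: 10)

// Each row of `linesP` is one segment encoded as (x1, y1, x2, y2).
```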

Finds line segments in a binary image using the probabilistic Hough transform.
The function implements the probabilistic Hough transform algorithm for line detection, described in CITE: Matas00
See the line detection example below: INCLUDE: snippets/imgproc_HoughLinesP.cpp This is a sample picture the function parameters have been tuned for:
And this is the output of the above program in case of the probabilistic Hough transform:
Declaration
Parameters
image
8-bit, single-channel binary source image. The image may be modified by the function.
lines
Output vector of lines. Each line is represented by a 4element vector
(x_1, y_1, x_2, y_2), where (x_1,y_1) and (x_2, y_2) are the ending points of each detected line segment.
rho
Distance resolution of the accumulator in pixels.
theta
Angle resolution of the accumulator in radians.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).
minLineLength
Minimum line length. Line segments shorter than that are rejected.

Finds line segments in a binary image using the probabilistic Hough transform.
The function implements the probabilistic Hough transform algorithm for line detection, described in CITE: Matas00
See the line detection example below: INCLUDE: snippets/imgproc_HoughLinesP.cpp This is a sample picture the function parameters have been tuned for:
And this is the output of the above program in case of the probabilistic Hough transform:
Declaration
Parameters
image
8-bit, single-channel binary source image. The image may be modified by the function.
lines
Output vector of lines. Each line is represented by a 4element vector
(x_1, y_1, x_2, y_2), where (x_1,y_1) and (x_2, y_2) are the ending points of each detected line segment.
rho
Distance resolution of the accumulator in pixels.
theta
Angle resolution of the accumulator in radians.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).

+HoughLinesPointSet:_lines:lines_max:threshold:min_rho:max_rho:rho_step:min_theta:max_theta:theta_step:
Finds lines in a set of points using the standard Hough transform.
The function finds lines in a set of points using a modification of the Hough transform. INCLUDE: snippets/imgproc_HoughLinesPointSet.cpp
Declaration
ObjectiveC
+ (void)HoughLinesPointSet:(nonnull Mat *)_point _lines:(nonnull Mat *)_lines lines_max:(int)lines_max threshold:(int)threshold min_rho:(double)min_rho max_rho:(double)max_rho rho_step:(double)rho_step min_theta:(double)min_theta max_theta:(double)max_theta theta_step:(double)theta_step;
Parameters
_point
Input vector of points. Each vector must be encoded as a Point vector
(x,y). Type must be CV_32FC2 or CV_32SC2.
_lines
Output vector of found lines. Each vector is encoded as a vector
(votes, rho, theta). The larger the value of ‘votes’, the higher the reliability of the Hough line.
lines_max
Maximum count of Hough lines.
threshold
Accumulator threshold parameter. Only those lines are returned that get enough votes (
>\texttt{threshold}).
min_rho
Minimum distance value of the accumulator in pixels.
max_rho
Maximum distance value of the accumulator in pixels.
rho_step
Distance resolution of the accumulator in pixels.
min_theta
Minimum angle value of the accumulator in radians.
max_theta
Maximum angle value of the accumulator in radians.
theta_step
Angle resolution of the accumulator in radians.
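A minimal sketch of calling this method from Swift, following the Objective-C selector above (the point coordinates and tuning values are made up for illustration):

```swift
import opencv2

// Hypothetical 2D point set: one CV_32FC2 point per row.
let points = Mat(rows: 20, cols: 1, type: CvType.CV_32FC2)
// ... fill `points` with (x, y) coordinates ...

let lines = Mat()

// Search rho in [0, 360] px with 1 px steps and theta in [0, pi/2]
// with 1-degree steps; keep at most 10 lines with >= 15 votes.
Imgproc.HoughLinesPointSet(points, _lines: lines,
                           lines_max: 10, threshold: 15,
                           min_rho: 0, max_rho: 360, rho_step: 1,
                           min_theta: 0, max_theta: .pi / 2,
                           theta_step: .pi / 180)

// Each row of `lines` is (votes, rho, theta), strongest lines first.
```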

Calculates the Laplacian of an image.
The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:
\texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}
This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{0}{1}{0}{1}{-4}{1}{0}{1}{0}
Declaration
ObjectiveC
+ (void)Laplacian:(nonnull Mat *)src dst:(nonnull Mat *)dst ddepth:(int)ddepth ksize:(int)ksize scale:(double)scale delta:(double)delta borderType:(BorderTypes)borderType;
Swift
class func Laplacian(src: Mat, dst: Mat, ddepth: Int32, ksize: Int32, scale: Double, delta: Double, borderType: BorderTypes)
Parameters
src
Source image.
dst
Destination image of the same size and the same number of channels as src .
ddepth
Desired depth of the destination image.
ksize
Aperture size used to compute the second-derivative filters. See #getDerivKernels for details. The size must be positive and odd.
scale
Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See #getDerivKernels for details.
delta
Optional delta value that is added to the results prior to storing them in dst .
borderType
Pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
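A usage sketch for the overload above (enum-case spellings such as .BORDER_DEFAULT are assumed from the C++ constants; the input Mat is a placeholder):

```swift
import opencv2

let src = Mat()   // placeholder; an 8-bit grayscale image in practice
let dst = Mat()

// CV_16S output depth keeps the negative second-derivative values
// that an 8-bit destination would truncate.
Imgproc.Laplacian(src: src, dst: dst,
                  ddepth: CvType.CV_16S,
                  ksize: 3,           // 3x3 Sobel-based aperture
                  scale: 1, delta: 0,
                  borderType: .BORDER_DEFAULT)
```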

Calculates the Laplacian of an image.
The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:
\texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}
This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{0}{1}{0}{1}{-4}{1}{0}{1}{0}
Declaration
Parameters
src
Source image.
dst
Destination image of the same size and the same number of channels as src .
ddepth
Desired depth of the destination image.
ksize
Aperture size used to compute the second-derivative filters. See #getDerivKernels for details. The size must be positive and odd.
scale
Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See #getDerivKernels for details.
delta
Optional delta value that is added to the results prior to storing them in dst .

Calculates the Laplacian of an image.
The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:
\texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}
This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{0}{1}{0}{1}{-4}{1}{0}{1}{0}
Declaration
Parameters
src
Source image.
dst
Destination image of the same size and the same number of channels as src .
ddepth
Desired depth of the destination image.
ksize
Aperture size used to compute the second-derivative filters. See #getDerivKernels for details. The size must be positive and odd.
scale
Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See #getDerivKernels for details.

Calculates the Laplacian of an image.
The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:
\texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}
This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{0}{1}{0}{1}{-4}{1}{0}{1}{0}
Declaration
Parameters
src
Source image.
dst
Destination image of the same size and the same number of channels as src .
ddepth
Desired depth of the destination image.
ksize
Aperture size used to compute the second-derivative filters. See #getDerivKernels for details. The size must be positive and odd.

Calculates the Laplacian of an image.
The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:
\texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}
This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{0}{1}{0}{1}{-4}{1}{0}{1}{0}
Declaration
Parameters
src
Source image.
dst
Destination image of the same size and the same number of channels as src .
ddepth
Desired depth of the destination image.

Calculates the first x- or y- image derivative using the Scharr operator.
The function computes the first x- or y- spatial image derivative using the Scharr operator. The call
\texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)} is equivalent to
\texttt{Sobel(src, dst, ddepth, dx, dy, FILTER\_SCHARR, scale, delta, borderType)}.
See
cartToPolar
Declaration
ObjectiveC
+ (void)Scharr:(nonnull Mat *)src dst:(nonnull Mat *)dst ddepth:(int)ddepth dx:(int)dx dy:(int)dy scale:(double)scale delta:(double)delta borderType:(BorderTypes)borderType;
Swift
class func Scharr(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, scale: Double, delta: Double, borderType: BorderTypes)
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src.
ddepth
output image depth, see REF: filter_depths “combinations”
dx
order of the derivative x.
dy
order of the derivative y.
scale
optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).
delta
optional delta value that is added to the results prior to storing them in dst.
borderType
pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
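A usage sketch for the overload above (enum-case spellings assumed from the C++ constants; the input Mat is a placeholder):

```swift
import opencv2

let src = Mat()    // placeholder grayscale image
let gradX = Mat()

// First x-derivative with the 3x3 Scharr kernel.
// Exactly one of dx, dy must be 1 and the other 0 for Scharr.
Imgproc.Scharr(src: src, dst: gradX,
               ddepth: CvType.CV_16S,  // keeps negative gradient values
               dx: 1, dy: 0,
               scale: 1, delta: 0,
               borderType: .BORDER_DEFAULT)
```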

Calculates the first x- or y- image derivative using the Scharr operator.
The function computes the first x- or y- spatial image derivative using the Scharr operator. The call
\texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)} is equivalent to
\texttt{Sobel(src, dst, ddepth, dx, dy, FILTER\_SCHARR, scale, delta, borderType)}.
See
cartToPolar
Declaration
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src.
ddepth
output image depth, see REF: filter_depths “combinations”
dx
order of the derivative x.
dy
order of the derivative y.
scale
optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).
delta
optional delta value that is added to the results prior to storing them in dst.

Calculates the first x- or y- image derivative using the Scharr operator.
The function computes the first x- or y- spatial image derivative using the Scharr operator. The call
\texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)} is equivalent to
\texttt{Sobel(src, dst, ddepth, dx, dy, FILTER\_SCHARR, scale, delta, borderType)}.
See
cartToPolar
Declaration
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src.
ddepth
output image depth, see REF: filter_depths “combinations”
dx
order of the derivative x.
dy
order of the derivative y.
scale
optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).

Calculates the first x- or y- image derivative using the Scharr operator.
The function computes the first x- or y- spatial image derivative using the Scharr operator. The call
\texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)} is equivalent to
\texttt{Sobel(src, dst, ddepth, dx, dy, FILTER\_SCHARR, scale, delta, borderType)}.
See
cartToPolar
Declaration
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src.
ddepth
output image depth, see REF: filter_depths “combinations”
dx
order of the derivative x.
dy
order of the derivative y.

Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.
In all cases except one, the \texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.
There is also the special value ksize = #FILTER_SCHARR (-1) that corresponds to the 3\times3 Scharr filter that may give more accurate results than the 3\times3 Sobel. The Scharr aperture is
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3}
for the x-derivative, or transposed for the y-derivative.
The function calculates an image derivative by convolving the image with the appropriate kernel:
\texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}}
The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to noise. Most often, the function is called with ( xorder = 1, yorder = 0, ksize = 3) or ( xorder = 0, yorder = 1, ksize = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{0}{1}{-2}{0}{2}{-1}{0}{1}
The second case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}
Declaration
ObjectiveC
+ (void)Sobel:(nonnull Mat *)src dst:(nonnull Mat *)dst ddepth:(int)ddepth dx:(int)dx dy:(int)dy ksize:(int)ksize scale:(double)scale delta:(double)delta borderType:(BorderTypes)borderType;
Swift
class func Sobel(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, ksize: Int32, scale: Double, delta: Double, borderType: BorderTypes)
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src .
ddepth
output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.
dx
order of the derivative x.
dy
order of the derivative y.
ksize
size of the extended Sobel kernel; it must be 1, 3, 5, or 7.
scale
optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).
delta
optional delta value that is added to the results prior to storing them in dst.
borderType
pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
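A usage sketch for the overload above, computing both first derivatives (enum-case spellings assumed from the C++ constants; the input Mat is a placeholder):

```swift
import opencv2

let src = Mat()    // placeholder grayscale image
let gradX = Mat()
let gradY = Mat()

// First derivatives in x and y with a 3x3 kernel. CV_16S avoids
// the truncation of negative values that an 8-bit depth would cause.
Imgproc.Sobel(src: src, dst: gradX, ddepth: CvType.CV_16S,
              dx: 1, dy: 0, ksize: 3, scale: 1, delta: 0,
              borderType: .BORDER_DEFAULT)
Imgproc.Sobel(src: src, dst: gradY, ddepth: CvType.CV_16S,
              dx: 0, dy: 1, ksize: 3, scale: 1, delta: 0,
              borderType: .BORDER_DEFAULT)
```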

Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.
In all cases except one, the \texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.
There is also the special value ksize = #FILTER_SCHARR (-1) that corresponds to the 3\times3 Scharr filter that may give more accurate results than the 3\times3 Sobel. The Scharr aperture is
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3}
for the x-derivative, or transposed for the y-derivative.
The function calculates an image derivative by convolving the image with the appropriate kernel:
\texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}}
The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to noise. Most often, the function is called with ( xorder = 1, yorder = 0, ksize = 3) or ( xorder = 0, yorder = 1, ksize = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{0}{1}{-2}{0}{2}{-1}{0}{1}
The second case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}
Declaration
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src .
ddepth
output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.
dx
order of the derivative x.
dy
order of the derivative y.
ksize
size of the extended Sobel kernel; it must be 1, 3, 5, or 7.
scale
optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).
delta
optional delta value that is added to the results prior to storing them in dst.

Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.
In all cases except one, the \texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.
There is also the special value ksize = #FILTER_SCHARR (-1) that corresponds to the 3\times3 Scharr filter that may give more accurate results than the 3\times3 Sobel. The Scharr aperture is
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3}
for the x-derivative, or transposed for the y-derivative.
The function calculates an image derivative by convolving the image with the appropriate kernel:
\texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}} The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to the noise. Most often, the function is called with ( xorder = 1, yorder = 0, ksize = 3) or ( xorder = 0, yorder = 1, ksize = 3) to calculate the first x or y image derivative. The first case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{0}{1}{-2}{0}{2}{-1}{0}{1} The second case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}
Declaration
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src .
ddepth
output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.
dx
order of the derivative x.
dy
order of the derivative y.
ksize
size of the extended Sobel kernel; it must be 1, 3, 5, or 7.
scale
optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).

Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.
In all cases except one, the
\texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1
can only be used for the first or the second x or y derivatives. There is also the special value
ksize = #FILTER_SCHARR (-1)
that corresponds to the 3\times3 Scharr filter that may give more accurate results than the 3\times3 Sobel. The Scharr aperture is \newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3} for the x-derivative, or transposed for the y-derivative.
The function calculates an image derivative by convolving the image with the appropriate kernel:
\texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}} The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to the noise. Most often, the function is called with ( xorder = 1, yorder = 0, ksize = 3) or ( xorder = 0, yorder = 1, ksize = 3) to calculate the first x or y image derivative. The first case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{0}{1}{-2}{0}{2}{-1}{0}{1} The second case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}
Declaration
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src .
ddepth
output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.
dx
order of the derivative x.
dy
order of the derivative y.
ksize
size of the extended Sobel kernel; it must be 1, 3, 5, or 7.

Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.
In all cases except one, the
\texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1
can only be used for the first or the second x or y derivatives. There is also the special value
ksize = #FILTER_SCHARR (-1)
that corresponds to the 3\times3 Scharr filter that may give more accurate results than the 3\times3 Sobel. The Scharr aperture is \newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3} for the x-derivative, or transposed for the y-derivative.
The function calculates an image derivative by convolving the image with the appropriate kernel:
\texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}} The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to the noise. Most often, the function is called with ( xorder = 1, yorder = 0, ksize = 3) or ( xorder = 0, yorder = 1, ksize = 3) to calculate the first x or y image derivative. The first case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{0}{1}{-2}{0}{2}{-1}{0}{1} The second case corresponds to a kernel of:
\newcommand{\vecthreethree}[9]{ \begin{bmatrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 & #9 \end{bmatrix} } \vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}
Declaration
Parameters
src
input image.
dst
output image of the same size and the same number of channels as src .
ddepth
output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.
dx
order of the derivative x.
dy
order of the derivative y.

Adds an image to the accumulator image.
The function adds src or some of its elements to dst :
\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0 The function supports multi-channel images. Each channel is processed independently.
The function cv::accumulate can be used, for example, to collect statistics of a scene background viewed by a still camera and for further foreground-background segmentation.
Declaration
Parameters
src
Input image of type CV_8UC(n), CV_16UC(n), CV_32FC(n) or CV_64FC(n), where n is a positive integer.
dst
Accumulator image with the same number of channels as input image, and a depth of CV_32F or CV_64F.
mask
Optional operation mask.

Adds an image to the accumulator image.
The function adds src or some of its elements to dst :
\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0 The function supports multi-channel images. Each channel is processed independently.
The function cv::accumulate can be used, for example, to collect statistics of a scene background viewed by a still camera and for further foreground-background segmentation.
Declaration
Parameters
src
Input image of type CV_8UC(n), CV_16UC(n), CV_32FC(n) or CV_64FC(n), where n is a positive integer.
dst
Accumulator image with the same number of channels as input image, and a depth of CV_32F or CV_64F.

Adds the per-element product of two input images to the accumulator image.
The function adds the product of two images or their selected regions to the accumulator dst :
\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src1} (x,y) \cdot \texttt{src2} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0 The function supports multi-channel images. Each channel is processed independently.

Adds the per-element product of two input images to the accumulator image.
The function adds the product of two images or their selected regions to the accumulator dst :
\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src1} (x,y) \cdot \texttt{src2} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0 The function supports multi-channel images. Each channel is processed independently.

Adds the square of a source image to the accumulator image.
The function adds the input image src or its selected region, raised to a power of 2, to the accumulator dst :
\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y)^2 \quad \text{if} \quad \texttt{mask} (x,y) \ne 0 The function supports multi-channel images. Each channel is processed independently.
See
+accumulateSquare:dst:mask:
,+accumulateProduct:src2:dst:mask:
,+accumulateWeighted:dst:alpha:mask:
Declaration
Parameters
src
Input image as 1- or 3-channel, 8-bit or 32-bit floating point.
dst
Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.
mask
Optional operation mask.

Adds the square of a source image to the accumulator image.
The function adds the input image src or its selected region, raised to a power of 2, to the accumulator dst :
\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y)^2 \quad \text{if} \quad \texttt{mask} (x,y) \ne 0 The function supports multi-channel images. Each channel is processed independently.
Declaration
Parameters
src
Input image as 1- or 3-channel, 8-bit or 32-bit floating point.
dst
Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.

Updates a running average.
The function calculates the weighted sum of the input image src and the accumulator dst so that dst becomes a running average of a frame sequence:
\texttt{dst} (x,y) \leftarrow (1 - \texttt{alpha}) \cdot \texttt{dst} (x,y) + \texttt{alpha} \cdot \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0 That is, alpha regulates the update speed (how fast the accumulator “forgets” about earlier images). The function supports multi-channel images. Each channel is processed independently.

Updates a running average.
The function calculates the weighted sum of the input image src and the accumulator dst so that dst becomes a running average of a frame sequence:
\texttt{dst} (x,y) \leftarrow (1 - \texttt{alpha}) \cdot \texttt{dst} (x,y) + \texttt{alpha} \cdot \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0 That is, alpha regulates the update speed (how fast the accumulator “forgets” about earlier images). The function supports multi-channel images. Each channel is processed independently.

Applies an adaptive threshold to an array.
The function transforms a grayscale image to a binary image according to the formulae:
 THRESH_BINARY
\newcommand{\fork}[4]{ \left\{ \begin{array}{l l} #1 & \text{#2}\\ #3 & \text{#4}\\ \end{array} \right.} dst(x,y) = \fork{\texttt{maxValue}}{if \(src(x,y) > T(x,y)\)}{0}{otherwise}
 THRESH_BINARY_INV
\newcommand{\fork}[4]{ \left\{ \begin{array}{l l} #1 & \text{#2}\\ #3 & \text{#4}\\ \end{array} \right.} dst(x,y) = \fork{0}{if \(src(x,y) > T(x,y)\)}{\texttt{maxValue}}{otherwise} where T(x,y) is a threshold calculated individually for each pixel (see adaptiveMethod parameter).
The function can process the image in-place.
Declaration
ObjectiveC
+ (void)adaptiveThreshold:(nonnull Mat *)src dst:(nonnull Mat *)dst maxValue:(double)maxValue adaptiveMethod:(AdaptiveThresholdTypes)adaptiveMethod thresholdType:(ThresholdTypes)thresholdType blockSize:(int)blockSize C:(double)C;
Swift
class func adaptiveThreshold(src: Mat, dst: Mat, maxValue: Double, adaptiveMethod: AdaptiveThresholdTypes, thresholdType: ThresholdTypes, blockSize: Int32, C: Double)
Parameters
src
Source 8bit singlechannel image.
dst
Destination image of the same size and the same type as src.
maxValue
Nonzero value assigned to the pixels for which the condition is satisfied
adaptiveMethod
Adaptive thresholding algorithm to use, see #AdaptiveThresholdTypes. The #BORDER_REPLICATE | #BORDER_ISOLATED is used to process boundaries.
thresholdType
Thresholding type that must be either #THRESH_BINARY or #THRESH_BINARY_INV, see #ThresholdTypes.
blockSize
Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
C
Constant subtracted from the mean or weighted mean (see the details below). Normally, it is positive but may be zero or negative as well.

Applies a GNU Octave/MATLAB equivalent colormap on a given image.
Declaration
ObjectiveC
+ (void)applyColorMap:(nonnull Mat *)src dst:(nonnull Mat *)dst colormap:(ColormapTypes)colormap;
Swift
class func applyColorMap(src: Mat, dst: Mat, colormap: ColormapTypes)
Parameters
src
The source image, grayscale or colored of type CV_8UC1 or CV_8UC3.
dst
The result is the colormapped source image. Note: Mat::create is called on dst.
colormap
The colormap to apply, see #ColormapTypes

Applies a user colormap on a given image.
Declaration
Parameters
src
The source image, grayscale or colored of type CV_8UC1 or CV_8UC3.
dst
The result is the colormapped source image. Note: Mat::create is called on dst.
userColor
The colormap to apply of type CV_8UC1 or CV_8UC3 and size 256

Approximates a polygonal curve(s) with the specified precision.
The function cv::approxPolyDP approximates a curve or a polygon with another curve/polygon with fewer vertices so that the distance between them is less than or equal to the specified precision. It uses the Douglas-Peucker algorithm http://en.wikipedia.org/wiki/RamerDouglasPeucker_algorithm
Declaration
ObjectiveC
+ (void)approxPolyDP:(nonnull NSArray<Point2f *> *)curve approxCurve:(nonnull NSMutableArray<Point2f *> *)approxCurve epsilon:(double)epsilon closed:(BOOL)closed;
Swift
class func approxPolyDP(curve: [Point2f], approxCurve: NSMutableArray, epsilon: Double, closed: Bool)
Parameters
curve
Input vector of 2D points stored in std::vector or Mat
approxCurve
Result of the approximation. The type should match the type of the input curve.
epsilon
Parameter specifying the approximation accuracy. This is the maximum distance between the original curve and its approximation.
closed
If true, the approximated curve is closed (its first and last vertices are connected). Otherwise, it is not closed.

Draws an arrow segment pointing from the first point to the second one.
The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.
Declaration
Parameters
img
Image.
pt1
The point the arrow starts from.
pt2
The point the arrow points to.
color
Line color.
thickness
Line thickness.
line_type
Type of the line. See #LineTypes
shift
Number of fractional bits in the point coordinates.
tipLength
The length of the arrow tip in relation to the arrow length

Draws an arrow segment pointing from the first point to the second one.
The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.
Declaration
Parameters
img
Image.
pt1
The point the arrow starts from.
pt2
The point the arrow points to.
color
Line color.
thickness
Line thickness.
line_type
Type of the line. See #LineTypes
shift
Number of fractional bits in the point coordinates.

Draws an arrow segment pointing from the first point to the second one.
The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.
Declaration
Parameters
img
Image.
pt1
The point the arrow starts from.
pt2
The point the arrow points to.
color
Line color.
thickness
Line thickness.
line_type
Type of the line. See #LineTypes

Draws an arrow segment pointing from the first point to the second one.
The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.
Declaration
Parameters
img
Image.
pt1
The point the arrow starts from.
pt2
The point the arrow points to.
color
Line color.
thickness
Line thickness.

Draws an arrow segment pointing from the first point to the second one.
The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.
Declaration
Parameters
img
Image.
pt1
The point the arrow starts from.
pt2
The point the arrow points to.
color
Line color.

Applies the bilateral filter to an image.
The function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html . bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.
Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look “cartoonish”.
Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for realtime applications, and perhaps d=9 for offline applications that need heavy noise filtering.
This filter does not work in place.
Declaration
ObjectiveC
+ (void)bilateralFilter:(nonnull Mat *)src dst:(nonnull Mat *)dst d:(int)d sigmaColor:(double)sigmaColor sigmaSpace:(double)sigmaSpace borderType:(BorderTypes)borderType;
Swift
class func bilateralFilter(src: Mat, dst: Mat, d: Int32, sigmaColor: Double, sigmaSpace: Double, borderType: BorderTypes)
Parameters
src
Source 8-bit or floating-point, 1-channel or 3-channel image.
dst
Destination image of the same size and type as src .
d
Diameter of each pixel neighborhood that is used during filtering. If it is nonpositive, it is computed from sigmaSpace.
sigmaColor
Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace
Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.
borderType
border mode used to extrapolate pixels outside of the image, see #BorderTypes

Applies the bilateral filter to an image.
The function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html . bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.
Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look “cartoonish”.
Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for realtime applications, and perhaps d=9 for offline applications that need heavy noise filtering.
This filter does not work in place.
Declaration
Parameters
src
Source 8-bit or floating-point, 1-channel or 3-channel image.
dst
Destination image of the same size and type as src .
d
Diameter of each pixel neighborhood that is used during filtering. If it is nonpositive, it is computed from sigmaSpace.
sigmaColor
Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace
Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

Blurs an image using the normalized box filter.
The function smooths an image using the kernel:
\texttt{K} = \frac{1}{\texttt{ksize.width*ksize.height}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix} The call
blur(src, dst, ksize, anchor, borderType)
is equivalent to boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).
Declaration
ObjectiveC
+ (void)blur:(nonnull Mat *)src dst:(nonnull Mat *)dst ksize:(nonnull Size2i *)ksize anchor:(nonnull Point2i *)anchor borderType:(BorderTypes)borderType;
Swift
class func blur(src: Mat, dst: Mat, ksize: Size2i, anchor: Point2i, borderType: BorderTypes)
Parameters
src
input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst
output image of the same size and type as src.
ksize
blurring kernel size.
anchor
anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
borderType
border mode used to extrapolate pixels outside of the image, see #BorderTypes. #BORDER_WRAP is not supported.

Blurs an image using the normalized box filter.
The function smooths an image using the kernel:
\texttt{K} = \frac{1}{\texttt{ksize.width*ksize.height}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix} The call
blur(src, dst, ksize, anchor, borderType)
is equivalent to boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).
Declaration
Parameters
src
input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst
output image of the same size and type as src.
ksize
blurring kernel size.
anchor
anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

Blurs an image using the normalized box filter.
The function smooths an image using the kernel:
\texttt{K} = \frac{1}{\texttt{ksize.width*ksize.height}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix} The call
blur(src, dst, ksize, anchor, borderType)
is equivalent to boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).
Declaration
Parameters
src
input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst
output image of the same size and type as src.
ksize
blurring kernel size.

Blurs an image using the box filter.
The function smooths an image using the kernel:
\texttt{K} = \alpha \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix} where
\alpha = \begin{cases} \frac{1}{\texttt{ksize.width*ksize.height}} & \texttt{when } \texttt{normalize=true} \\ 1 & \texttt{otherwise} \end{cases} Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use #integral.
Declaration
ObjectiveC
+ (void)boxFilter:(nonnull Mat *)src dst:(nonnull Mat *)dst ddepth:(int)ddepth ksize:(nonnull Size2i *)ksize anchor:(nonnull Point2i *)anchor normalize:(BOOL)normalize borderType:(BorderTypes)borderType;
Swift
class func boxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i, anchor: Point2i, normalize: Bool, borderType: BorderTypes)
Parameters
src
input image.
dst
output image of the same size and type as src.
ddepth
the output image depth (-1 to use src.depth()).
ksize
blurring kernel size.
anchor
anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
normalize
flag, specifying whether the kernel is normalized by its area or not.
borderType
border mode used to extrapolate pixels outside of the image, see #BorderTypes. #BORDER_WRAP is not supported.

Blurs an image using the box filter.
The function smooths an image using the kernel:
\texttt{K} = \alpha \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix} where
\alpha = \begin{cases} \frac{1}{\texttt{ksize.width*ksize.height}} & \texttt{when } \texttt{normalize=true} \\ 1 & \texttt{otherwise} \end{cases} Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use #integral.
Declaration
Parameters
src
input image.
dst
output image of the same size and type as src.
ddepth
the output image depth (-1 to use src.depth()).
ksize
blurring kernel size.
anchor
anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
normalize
flag, specifying whether the kernel is normalized by its area or not.

Blurs an image using the box filter.
The function smooths an image using the kernel:
\texttt{K} = \alpha \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix} where
\alpha = \begin{cases} \frac{1}{\texttt{ksize.width*ksize.height}} & \texttt{when } \texttt{normalize=true} \\ 1 & \texttt{otherwise} \end{cases} Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use #integral.
Declaration
Parameters
src
input image.
dst
output image of the same size and type as src.
ddepth
the output image depth (-1 to use src.depth()).
ksize
blurring kernel size.
anchor
anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

Blurs an image using the box filter.
The function smooths an image using the kernel:
\texttt{K} = \alpha \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix} where
\alpha = \begin{cases} \frac{1}{\texttt{ksize.width*ksize.height}} & \texttt{when } \texttt{normalize=true} \\ 1 & \texttt{otherwise} \end{cases} Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use #integral.
Declaration
Parameters
src
input image.
dst
output image of the same size and type as src.
ddepth
the output image depth (-1 to use src.depth()).
ksize
blurring kernel size.

Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.
The function finds the four vertices of a rotated rectangle. This function is useful to draw the rectangle. In C++, instead of using this function, you can directly use RotatedRect::points method. Please visit the REF: tutorial_bounding_rotated_ellipses “tutorial on Creating Bounding rotated boxes and ellipses for contours” for more information.
Declaration
ObjectiveC
+ (void)boxPoints:(nonnull RotatedRect *)box points:(nonnull Mat *)points;
Swift
class func boxPoints(box: RotatedRect, points: Mat)
Parameters
box
The input rotated rectangle. It may be the output of
points
The output array of four vertices of rectangles.

Declaration
ObjectiveC
+ (void)calcBackProject:(NSArray<Mat*>*)images channels:(IntVector*)channels hist:(Mat*)hist dst:(Mat*)dst ranges:(FloatVector*)ranges scale:(double)scale NS_SWIFT_NAME(calcBackProject(images:channels:hist:dst:ranges:scale:));
Swift
class func calcBackProject(images: [Mat], channels: IntVector, hist: Mat, dst: Mat, ranges: FloatVector, scale: Double)

Draws a circle.
The function cv::circle draws a simple or filled circle with a given center and radius.
Declaration
Parameters
img
Image where the circle is drawn.
center
Center of the circle.
radius
Radius of the circle.
color
Circle color.
thickness
Thickness of the circle outline, if positive. Negative values, like #FILLED, mean that a filled circle is to be drawn.
lineType
Type of the circle boundary. See #LineTypes
shift
Number of fractional bits in the coordinates of the center and in the radius value.

Draws a circle.
The function cv::circle draws a simple or filled circle with a given center and radius.
Declaration
Parameters
img
Image where the circle is drawn.
center
Center of the circle.
radius
Radius of the circle.
color
Circle color.
thickness
Thickness of the circle outline, if positive. Negative values, like #FILLED, mean that a filled circle is to be drawn.
lineType
Type of the circle boundary. See #LineTypes

Draws a circle.
The function cv::circle draws a simple or filled circle with a given center and radius.
Declaration
Parameters
img
Image where the circle is drawn.
center
Center of the circle.
radius
Radius of the circle.
color
Circle color.
thickness
Thickness of the circle outline, if positive. Negative values, like #FILLED, mean that a filled circle is to be drawn.

Draws a circle.
The function cv::circle draws a simple or filled circle with a given center and radius.
Declaration
Parameters
img
Image where the circle is drawn.
center
Center of the circle.
radius
Radius of the circle.
color
Circle color. mean that a filled circle is to be drawn.

Converts image transformation maps from one representation to another.
The function converts a pair of maps for remap from one representation to another. The following options ( (map1.type(), map2.type())
\rightarrow (dstmap1.type(), dstmap2.type()) ) are supported:  \texttt{(CV\_32FC1, CV\_32FC1)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}. This is the most frequently used conversion operation, in which the original floating-point maps (see remap) are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when nninterpolation=false) contains indices in the interpolation tables.
 \texttt{(CV\_32FC2)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}. The same as above but the original maps are stored in one 2-channel matrix.
Reverse conversion. Obviously, the reconstructed floating-point maps will not be exactly the same as the originals.
See
+remap:dst:map1:map2:interpolation:borderMode:borderValue:
,undistort
,initUndistortRectifyMap
Declaration
Parameters
map1
The first input map of type CV_16SC2, CV_32FC1, or CV_32FC2 .
map2
The second input map of type CV_16UC1, CV_32FC1, or none (empty matrix), respectively.
dstmap1
The first output map that has the type dstmap1type and the same size as src .
dstmap2
The second output map.
dstmap1type
Type of the first output map that should be CV_16SC2, CV_32FC1, or CV_32FC2 .
nninterpolation
Flag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation.

Converts image transformation maps from one representation to another.
The function converts a pair of maps for remap from one representation to another. The following options ( (map1.type(), map2.type())
\rightarrow (dstmap1.type(), dstmap2.type()) ) are supported:  \texttt{(CV\_32FC1, CV\_32FC1)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}. This is the most frequently used conversion operation, in which the original floating-point maps (see remap) are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when nninterpolation=false) contains indices in the interpolation tables.
 \texttt{(CV\_32FC2)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}. The same as above but the original maps are stored in one 2-channel matrix.
Reverse conversion. Obviously, the reconstructed floating-point maps will not be exactly the same as the originals.
See
+remap:dst:map1:map2:interpolation:borderMode:borderValue:
,undistort
,initUndistortRectifyMap
Declaration
Parameters
map1
The first input map of type CV_16SC2, CV_32FC1, or CV_32FC2 .
map2
The second input map of type CV_16UC1, CV_32FC1, or none (empty matrix), respectively.
dstmap1
The first output map that has the type dstmap1type and the same size as src .
dstmap2
The second output map.
dstmap1type
Type of the first output map that should be CV_16SC2, CV_32FC1, or CV_32FC2.

Finds the convex hull of a point set.
The function cv::convexHull finds the convex hull of a 2D point set using Sklansky's algorithm CITE: Sklansky82 that has O(N log N) complexity in the current implementation.
Note
points
and hull
should be different arrays; in-place processing isn’t supported. Check REF: tutorial_hull “the corresponding tutorial” for more details.
useful links:
https://www.learnopencv.com/convexhullusingopencvinpythonandc/
Declaration
Parameters
points
Input 2D point set, stored in std::vector or Mat.
hull
Output convex hull. It is either an integer vector of indices or a vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.
clockwise
Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counterclockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards.
returnPoints
Operation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. When the output array is std::vector, the flag is ignored, and the output depends on the type of the vector: std::vector<int> implies returnPoints=false, std::vector<Point> implies returnPoints=true.
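OpenCV uses Sklansky’s algorithm internally; as a conceptual sketch of what convexHull computes, the same hull can be obtained with Andrew’s monotone chain, a different but equally standard O(N log N) method, shown here in Python for illustration only.

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise
    order (analogous to convexHull with returnPoints=true)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half: it repeats the other half's start.
    return lower[:-1] + upper[:-1]
```

A square with an interior point, [(0,0), (2,0), (2,2), (0,2), (1,1)], yields the four corners; the interior point is discarded, just as convexHull returns only the subset of points on the hull.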

Finds the convex hull of a point set.
The function cv::convexHull finds the convex hull of a 2D point set using Sklansky’s algorithm CITE: Sklansky82 that has O(N log N) complexity in the current implementation.
Note
points
and hull
should be different arrays; in-place processing isn’t supported. Check REF: tutorial_hull “the corresponding tutorial” for more details.
useful links:
https://www.learnopencv.com/convexhullusingopencvinpythonandc/
Declaration
Parameters
points
Input 2D point set, stored in std::vector or Mat.
hull
Output convex hull. It is either an integer vector of indices or a vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.
clockwise
Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counterclockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards.
returnPoints
Operation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. When the output array is std::vector, the flag is ignored, and the output depends on the type of the vector: std::vector<int> implies returnPoints=false, std::vector<Point> implies returnPoints=true.

Finds the convexity defects of a contour.
The figure below displays convexity defects of a hand contour:
Declaration
Parameters
contour
Input contour.
convexhull
Convex hull obtained using convexHull that should contain indices of the contour points that make the hull.
convexityDefects
The output vector of convexity defects. In C++ and the new Python/Java interface, each convexity defect is represented as a 4-element integer vector (a.k.a. #Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is a fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is fixpt_depth/256.0.
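Unpacking one defect record as described above can be sketched in a few lines of Python; the tuple layout follows the Vec4i description, and the sample values are hypothetical.

```python
def decode_defect(vec4i):
    """Unpack a convexity-defect record (start_index, end_index,
    farthest_pt_index, fixpt_depth) and convert the 8.8 fixed-point
    depth to a float by dividing by 256."""
    start, end, farthest, fixpt_depth = vec4i
    return start, end, farthest, fixpt_depth / 256.0
```

For instance, a record with fixpt_depth = 512 corresponds to a depth of 2.0 pixels.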

Calculates eigenvalues and eigenvectors of image blocks for corner detection.
For every pixel p, the function cornerEigenValsAndVecs considers a blockSize × blockSize neighborhood S(p). It calculates the covariance matrix of derivatives over the neighborhood as:
M = \begin{bmatrix} \sum_{S(p)} (dI/dx)^2 & \sum_{S(p)} dI/dx \, dI/dy \\ \sum_{S(p)} dI/dx \, dI/dy & \sum_{S(p)} (dI/dy)^2 \end{bmatrix}
where the derivatives are computed using the Sobel operator.
After that, it finds the eigenvectors and eigenvalues of M and stores them in the destination image as (\lambda_1, \lambda_2, x_1, y_1, x_2, y_2), where
\lambda_1, \lambda_2 are the non-sorted eigenvalues of M,
x_1, y_1 are the eigenvectors corresponding to \lambda_1,
x_2, y_2 are the eigenvectors corresponding to \lambda_2.
The output of the function can be used for robust edge or corner detection.
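The per-pixel computation can be sketched with NumPy. This is a conceptual illustration given precomputed Sobel derivatives over one neighborhood S(p), not the library implementation; the function name and output tuple order mirror the description above.

```python
import numpy as np

def eigen_vals_and_vecs(dIdx, dIdy):
    """Given arrays of Sobel derivatives over a blockSize x blockSize
    neighborhood S(p), build the covariance matrix M and return
    (lambda1, lambda2, x1, y1, x2, y2) for that single pixel."""
    m = np.array([[np.sum(dIdx ** 2),    np.sum(dIdx * dIdy)],
                  [np.sum(dIdx * dIdy),  np.sum(dIdy ** 2)]])
    # eigh returns eigenvalues in ascending order; columns of v are
    # the corresponding (unit) eigenvectors.
    w, v = np.linalg.eigh(m)
    return (w[0], w[1], v[0, 0], v[1, 0], v[0, 1], v[1, 1])
```

For a neighborhood whose gradient is purely horizontal (dI/dy = 0 everywhere), one eigenvalue is zero and the other equals the summed squared dI/dx, the signature of an edge rather than a corner; two large eigenvalues indicate a corner.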