Imgproc

Objective-C

@interface Imgproc : NSObject

Swift

class Imgproc : NSObject

The Imgproc module

Member classes: GeneralizedHough, GeneralizedHoughBallard, GeneralizedHoughGuil, CLAHE, Subdiv2D, LineSegmentDetector

Member enums: SmoothMethod_c, MorphShapes_c, SpecialFilter, MorphTypes, MorphShapes, InterpolationFlags, WarpPolarMode, InterpolationMasks, DistanceTypes, DistanceTransformMasks, ThresholdTypes, AdaptiveThresholdTypes, GrabCutClasses, GrabCutModes, DistanceTransformLabelTypes, FloodFillFlags, ConnectedComponentsTypes, ConnectedComponentsAlgorithmsTypes, RetrievalModes, ContourApproximationModes, ShapeMatchModes, HoughModes, LineSegmentDetectorModes, HistCompMethods, ColorConversionCodes, RectanglesIntersectTypes, LineTypes, HersheyFonts, MarkerTypes, TemplateMatchModes, ColormapTypes

Class Constants

  • Declaration

    Objective-C

    @property (class, readonly) int CV_GAUSSIAN_5x5

    Swift

    class var CV_GAUSSIAN_5x5: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_SCHARR

    Swift

    class var CV_SCHARR: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_MAX_SOBEL_KSIZE

    Swift

    class var CV_MAX_SOBEL_KSIZE: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_RGBA2mRGBA

    Swift

    class var CV_RGBA2mRGBA: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_mRGBA2RGBA

    Swift

    class var CV_mRGBA2RGBA: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_WARP_FILL_OUTLIERS

    Swift

    class var CV_WARP_FILL_OUTLIERS: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_WARP_INVERSE_MAP

    Swift

    class var CV_WARP_INVERSE_MAP: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_CHAIN_CODE

    Swift

    class var CV_CHAIN_CODE: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_LINK_RUNS

    Swift

    class var CV_LINK_RUNS: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_POLY_APPROX_DP

    Swift

    class var CV_POLY_APPROX_DP: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_CLOCKWISE

    Swift

    class var CV_CLOCKWISE: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_COUNTER_CLOCKWISE

    Swift

    class var CV_COUNTER_CLOCKWISE: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int CV_CANNY_L2_GRADIENT

    Swift

    class var CV_CANNY_L2_GRADIENT: Int32 { get }

Methods

  • Declaration

    Objective-C

    + (Mat*)getAffineTransform:(NSArray<Point2f*>*)src dst:(NSArray<Point2f*>*)dst NS_SWIFT_NAME(getAffineTransform(src:dst:));

    Swift

    class func getAffineTransform(src: [Point2f], dst: [Point2f]) -> Mat
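
    As a hedged illustration, the returned 2x3 matrix is typically passed to warpAffine. A minimal Swift sketch, assuming the opencv2 module and a placeholder 8-bit source Mat named src:

     import opencv2

     // Three corresponding point pairs fully determine an affine transform.
     let srcTri = [Point2f(x: 0, y: 0), Point2f(x: 100, y: 0), Point2f(x: 0, y: 100)]
     let dstTri = [Point2f(x: 10, y: 10), Point2f(x: 95, y: 5), Point2f(x: 5, y: 110)]

     let M = Imgproc.getAffineTransform(src: srcTri, dst: dstTri)  // 2x3 matrix

     // Apply the transform (warpAffine is assumed to be available in the same bindings).
     let warped = Mat()
     Imgproc.warpAffine(src: src, dst: warped, M: M,
                        dsize: Size2i(width: src.cols(), height: src.rows()))
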
  • Returns Gabor filter coefficients.

    For more details about gabor filter equations and parameters, see: Gabor Filter.

    Declaration

    Objective-C

    + (nonnull Mat *)getGaborKernel:(nonnull Size2i *)ksize
                              sigma:(double)sigma
                              theta:(double)theta
                              lambd:(double)lambd
                              gamma:(double)gamma
                                psi:(double)psi
                              ktype:(int)ktype;

    Swift

    class func getGaborKernel(ksize: Size2i, sigma: Double, theta: Double, lambd: Double, gamma: Double, psi: Double, ktype: Int32) -> Mat

    Parameters

    ksize

    Size of the filter returned.

    sigma

    Standard deviation of the gaussian envelope.

    theta

    Orientation of the normal to the parallel stripes of a Gabor function.

    lambd

    Wavelength of the sinusoidal factor.

    gamma

    Spatial aspect ratio.

    psi

    Phase offset.

    ktype

    Type of filter coefficients. It can be CV_32F or CV_64F .
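
    As a hedged sketch of typical use, the kernel is convolved with an image via filter2D (opencv2 module assumed; gray is a placeholder single-channel Mat; the parameter values are illustrative):

     import opencv2

     // 31x31 Gabor kernel responding to horizontal stripes of roughly 10 px wavelength.
     let kernel = Imgproc.getGaborKernel(ksize: Size2i(width: 31, height: 31),
                                         sigma: 4.0, theta: 0.0, lambd: 10.0,
                                         gamma: 0.5, psi: 0.0, ktype: CvType.CV_32F)

     let response = Mat()
     Imgproc.filter2D(src: gray, dst: response, ddepth: -1, kernel: kernel)  // ddepth -1 keeps src depth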

  • Returns Gabor filter coefficients.

    For more details about gabor filter equations and parameters, see: Gabor Filter.

    Declaration

    Objective-C

    + (nonnull Mat *)getGaborKernel:(nonnull Size2i *)ksize
                              sigma:(double)sigma
                              theta:(double)theta
                              lambd:(double)lambd
                              gamma:(double)gamma
                                psi:(double)psi;

    Swift

    class func getGaborKernel(ksize: Size2i, sigma: Double, theta: Double, lambd: Double, gamma: Double, psi: Double) -> Mat

    Parameters

    ksize

    Size of the filter returned.

    sigma

    Standard deviation of the gaussian envelope.

    theta

    Orientation of the normal to the parallel stripes of a Gabor function.

    lambd

    Wavelength of the sinusoidal factor.

    gamma

    Spatial aspect ratio.

    psi

    Phase offset.

  • Returns Gabor filter coefficients.

    For more details about gabor filter equations and parameters, see: Gabor Filter.

    Declaration

    Objective-C

    + (nonnull Mat *)getGaborKernel:(nonnull Size2i *)ksize
                              sigma:(double)sigma
                              theta:(double)theta
                              lambd:(double)lambd
                              gamma:(double)gamma;

    Swift

    class func getGaborKernel(ksize: Size2i, sigma: Double, theta: Double, lambd: Double, gamma: Double) -> Mat

    Parameters

    ksize

    Size of the filter returned.

    sigma

    Standard deviation of the gaussian envelope.

    theta

    Orientation of the normal to the parallel stripes of a Gabor function.

    lambd

    Wavelength of the sinusoidal factor.

    gamma

    Spatial aspect ratio.

  • Returns Gaussian filter coefficients.

    The function computes and returns the \texttt{ksize} \times 1 matrix of Gaussian filter coefficients:

    G_i = \alpha \cdot e^{-(i - (\texttt{ksize}-1)/2)^2 / (2 \cdot \texttt{sigma}^2)},

    where i = 0..\texttt{ksize}-1 and \alpha is the scale factor chosen so that \sum_i G_i = 1.

    Two such generated kernels can be passed to sepFilter2D. Those functions automatically recognize smoothing kernels (a symmetrical kernel with sum of weights equal to 1) and handle them accordingly. You may also use the higher-level GaussianBlur.

    Declaration

    Objective-C

    + (nonnull Mat *)getGaussianKernel:(int)ksize
                                 sigma:(double)sigma
                                 ktype:(int)ktype;

    Swift

    class func getGaussianKernel(ksize: Int32, sigma: Double, ktype: Int32) -> Mat

    Parameters

    ksize

    Aperture size. It should be odd ( \texttt{ksize} \mod 2 = 1 ) and positive.

    sigma

    Gaussian standard deviation. If it is non-positive, it is computed from ksize as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8.

    ktype

    Type of filter coefficients. It can be CV_32F or CV_64F .
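
    A short sketch of the separable use mentioned above, assuming the opencv2 module and a placeholder Mat named src:

     import opencv2

     // 7x1 Gaussian kernel; a non-positive sigma is derived from ksize as described above.
     let g = Imgproc.getGaussianKernel(ksize: 7, sigma: -1)

     // Filtering rows and columns with the same 1-D kernel equals a 7x7 Gaussian blur.
     let smoothed = Mat()
     Imgproc.sepFilter2D(src: src, dst: smoothed, ddepth: -1, kernelX: g, kernelY: g)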

  • Returns Gaussian filter coefficients.

    The function computes and returns the \texttt{ksize} \times 1 matrix of Gaussian filter coefficients:

    G_i = \alpha \cdot e^{-(i - (\texttt{ksize}-1)/2)^2 / (2 \cdot \texttt{sigma}^2)},

    where i = 0..\texttt{ksize}-1 and \alpha is the scale factor chosen so that \sum_i G_i = 1.

    Two such generated kernels can be passed to sepFilter2D. Those functions automatically recognize smoothing kernels (a symmetrical kernel with sum of weights equal to 1) and handle them accordingly. You may also use the higher-level GaussianBlur.

    Declaration

    Objective-C

    + (nonnull Mat *)getGaussianKernel:(int)ksize sigma:(double)sigma;

    Swift

    class func getGaussianKernel(ksize: Int32, sigma: Double) -> Mat

    Parameters

    ksize

    Aperture size. It should be odd ( \texttt{ksize} \mod 2 = 1 ) and positive.

    sigma

    Gaussian standard deviation. If it is non-positive, it is computed from ksize as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8.

  • Calculates a perspective transform from four pairs of the corresponding points.

    The function calculates the 3 \times 3 matrix of a perspective transform so that:

    \begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = \texttt{map\_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}

    where dst(i) = (x'_i, y'_i), src(i) = (x_i, y_i), i = 0, 1, 2, 3

    See

    findHomography, +warpPerspective:dst:M:dsize:flags:borderMode:borderValue:, perspectiveTransform

    Declaration

    Objective-C

    + (nonnull Mat *)getPerspectiveTransform:(nonnull Mat *)src
                                         dst:(nonnull Mat *)dst
                                 solveMethod:(int)solveMethod;

    Swift

    class func getPerspectiveTransform(src: Mat, dst: Mat, solveMethod: Int32) -> Mat
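
    A hedged sketch of the usual pairing with warpPerspective, assuming the opencv2 module, its MatOfPoint2f converter, and a placeholder source Mat named src:

     import opencv2

     // Four corners of a quadrilateral in the source and the target rectangle.
     let srcPts = MatOfPoint2f(array: [Point2f(x: 56, y: 65), Point2f(x: 368, y: 52),
                                       Point2f(x: 28, y: 387), Point2f(x: 389, y: 390)])
     let dstPts = MatOfPoint2f(array: [Point2f(x: 0, y: 0), Point2f(x: 300, y: 0),
                                       Point2f(x: 0, y: 300), Point2f(x: 300, y: 300)])

     let M = Imgproc.getPerspectiveTransform(src: srcPts, dst: dstPts)  // 3x3 matrix
     let warped = Mat()
     Imgproc.warpPerspective(src: src, dst: warped, M: M, dsize: Size2i(width: 300, height: 300))
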
  • Calculates a perspective transform from four pairs of the corresponding points.

    The function calculates the 3 \times 3 matrix of a perspective transform so that:

    \begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = \texttt{map\_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}

    where dst(i) = (x'_i, y'_i), src(i) = (x_i, y_i), i = 0, 1, 2, 3

    See

    findHomography, +warpPerspective:dst:M:dsize:flags:borderMode:borderValue:, perspectiveTransform

    Declaration

    Objective-C

    + (nonnull Mat *)getPerspectiveTransform:(nonnull Mat *)src
                                         dst:(nonnull Mat *)dst;

    Swift

    class func getPerspectiveTransform(src: Mat, dst: Mat) -> Mat
  • Calculates an affine matrix of 2D rotation.

    The function calculates the following matrix:

    \begin{bmatrix} \alpha & \beta & (1-\alpha) \cdot \texttt{center.x} - \beta \cdot \texttt{center.y} \\ -\beta & \alpha & \beta \cdot \texttt{center.x} + (1-\alpha) \cdot \texttt{center.y} \end{bmatrix}

    where

    \begin{array}{l} \alpha = \texttt{scale} \cdot \cos \texttt{angle}, \\ \beta = \texttt{scale} \cdot \sin \texttt{angle} \end{array}

    The transformation maps the rotation center to itself. If this is not the target, adjust the shift.

    Declaration

    Objective-C

    + (nonnull Mat *)getRotationMatrix2D:(nonnull Point2f *)center
                                   angle:(double)angle
                                   scale:(double)scale;

    Swift

    class func getRotationMatrix2D(center: Point2f, angle: Double, scale: Double) -> Mat
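
    A minimal sketch rotating an image about its center, assuming the opencv2 module and a placeholder Mat named src:

     import opencv2

     let center = Point2f(x: Float(src.cols()) / 2, y: Float(src.rows()) / 2)
     let R = Imgproc.getRotationMatrix2D(center: center, angle: 30, scale: 1.0)  // 30 degrees CCW

     let rotated = Mat()
     Imgproc.warpAffine(src: src, dst: rotated, M: R,
                        dsize: Size2i(width: src.cols(), height: src.rows()))
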
  • Returns a structuring element of the specified size and shape for morphological operations.

    The function constructs and returns the structuring element that can be further passed to #erode, #dilate or #morphologyEx. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.

    Declaration

    Objective-C

    + (nonnull Mat *)getStructuringElement:(MorphShapes)shape
                                     ksize:(nonnull Size2i *)ksize
                                    anchor:(nonnull Point2i *)anchor;

    Swift

    class func getStructuringElement(shape: MorphShapes, ksize: Size2i, anchor: Point2i) -> Mat

    Parameters

    shape

    Element shape that could be one of #MorphShapes

    ksize

    Size of the structuring element.

    anchor

    Anchor position within the element. The default value (-1, -1) means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position; in other cases the anchor just regulates how much the result of the morphological operation is shifted.
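
    A hedged sketch of the typical pairing with morphologyEx (opencv2 module assumed; binary is a placeholder binary Mat and the enum case names follow the Java bindings):

     import opencv2

     // 5x5 elliptical element with the anchor at the center.
     let element = Imgproc.getStructuringElement(shape: .MORPH_ELLIPSE,
                                                 ksize: Size2i(width: 5, height: 5),
                                                 anchor: Point2i(x: -1, y: -1))

     // Morphological opening removes small bright specks.
     let opened = Mat()
     Imgproc.morphologyEx(src: binary, dst: opened, op: .MORPH_OPEN, kernel: element)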

  • Returns a structuring element of the specified size and shape for morphological operations.

    The function constructs and returns the structuring element that can be further passed to #erode, #dilate or #morphologyEx. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.

    Declaration

    Objective-C

    + (nonnull Mat *)getStructuringElement:(MorphShapes)shape
                                     ksize:(nonnull Size2i *)ksize;

    Swift

    class func getStructuringElement(shape: MorphShapes, ksize: Size2i) -> Mat

    Parameters

    shape

    Element shape that could be one of #MorphShapes

    ksize

    Size of the structuring element. In this overload the anchor is at the element center. Note that only the shape of a cross-shaped element depends on the anchor position; in other cases the anchor just regulates how much the result of the morphological operation is shifted.

  • Calculates all of the moments up to the third order of a polygon or rasterized shape.

    The function computes moments, up to the 3rd order, of a vector shape or a rasterized shape. The results are returned in the structure cv::Moments.

    Note

    Only applicable to contour moments calculations from Python bindings: the numpy type for the input array should be either np.int32 or np.float32.

    Declaration

    Objective-C

    + (nonnull Moments *)moments:(nonnull Mat *)array binaryImage:(BOOL)binaryImage;

    Swift

    class func moments(array: Mat, binaryImage: Bool) -> Moments

    Parameters

    array

    Raster image (single-channel, 8-bit or floating-point 2D array) or an array ( 1 \times N or N \times 1 ) of 2D points (Point or Point2f).

    binaryImage

    If it is true, all non-zero image pixels are treated as 1’s. The parameter is used for images only.

    Return Value

    moments.
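
    A common use is recovering a shape centroid from the spatial moments; a hedged sketch, assuming the opencv2 module and a placeholder contour Mat named contourMat:

     import opencv2

     let m = Imgproc.moments(array: contourMat, binaryImage: false)
     if m.m00 != 0 {  // guard against degenerate shapes
         let cx = m.m10 / m.m00  // centroid x
         let cy = m.m01 / m.m00  // centroid y
         print("centroid: (\(cx), \(cy))")
     }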

  • Calculates all of the moments up to the third order of a polygon or rasterized shape.

    The function computes moments, up to the 3rd order, of a vector shape or a rasterized shape. The results are returned in the structure cv::Moments.

    Note

    Only applicable to contour moments calculations from Python bindings: the numpy type for the input array should be either np.int32 or np.float32.

    Declaration

    Objective-C

    + (nonnull Moments *)moments:(nonnull Mat *)array;

    Swift

    class func moments(array: Mat) -> Moments

    Parameters

    array

    Raster image (single-channel, 8-bit or floating-point 2D array) or an array ( 1 \times N or N \times 1 ) of 2D points (Point or Point2f). In this overload, binaryImage defaults to false.

    Return Value

    moments.

  • The function is used to detect translational shifts that occur between two images.

    The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation

    Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.

    The function performs the following equations:

    • First it applies a Hanning window (see http://en.wikipedia.org/wiki/Hann_function) to each image to remove possible edge effects. This window is cached until the array size changes to speed up processing time.
    • Next it computes the forward DFTs of each source array:
      \mathbf{G}_a = \mathcal{F}\{src_1\}, \; \mathbf{G}_b = \mathcal{F}\{src_2\}
      where
      \mathcal{F}
      is the forward DFT.
    • It then computes the cross-power spectrum of each frequency domain array:
      R = \frac{ \mathbf{G}_a \mathbf{G}_b^*}{|\mathbf{G}_a \mathbf{G}_b^*|}
    • Next the cross-correlation is converted back into the time domain via the inverse DFT:
      r = \mathcal{F}^{-1}\{R\}
    • Finally, it computes the peak location and computes a 5x5 weighted centroid around the peak to achieve sub-pixel accuracy.
      (\Delta x, \Delta y) = \texttt{weightedCentroid} \{\arg \max_{(x, y)}\{r\}\}
    • If non-zero, the response parameter is computed as the sum of the elements of r within the 5x5 centroid around the peak location. It is normalized to a maximum of 1 (meaning there is a single peak) and will be smaller when there are multiple peaks.

    See

    dft, getOptimalDFTSize, idft, mulSpectrums, createHanningWindow

    Declaration

    Objective-C

    + (nonnull Point2d *)phaseCorrelate:(nonnull Mat *)src1
                                   src2:(nonnull Mat *)src2
                                 window:(nonnull Mat *)window
                               response:(nonnull double *)response;

    Swift

    class func phaseCorrelate(src1: Mat, src2: Mat, window: Mat, response: UnsafeMutablePointer<Double>) -> Point2d

    Parameters

    src1

    Source floating point array (CV_32FC1 or CV_64FC1)

    src2

    Source floating point array (CV_32FC1 or CV_64FC1)

    window

    Floating point array with windowing coefficients to reduce edge effects (optional).

    response

    Signal power within the 5x5 centroid around the peak, between 0 and 1 (optional).

    Return Value

    detected phase shift (sub-pixel) between the two arrays.
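
    A hedged sketch measuring the shift between two frames, using createHanningWindow as suggested above (opencv2 module assumed; prev and next are placeholder CV_32FC1 Mats of equal size):

     import opencv2

     // A window matching the frame size reduces edge effects.
     let window = Mat()
     Imgproc.createHanningWindow(dst: window,
                                 winSize: Size2i(width: prev.cols(), height: prev.rows()),
                                 type: CvType.CV_32F)

     var response = 0.0
     let shift = Imgproc.phaseCorrelate(src1: prev, src2: next, window: window, response: &response)
     print("dx=\(shift.x) dy=\(shift.y) confidence=\(response)")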

  • The function is used to detect translational shifts that occur between two images.

    The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation

    Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.

    The function performs the following equations:

    • First it applies a Hanning window (see http://en.wikipedia.org/wiki/Hann_function) to each image to remove possible edge effects. This window is cached until the array size changes to speed up processing time.
    • Next it computes the forward DFTs of each source array:
      \mathbf{G}_a = \mathcal{F}\{src_1\}, \; \mathbf{G}_b = \mathcal{F}\{src_2\}
      where
      \mathcal{F}
      is the forward DFT.
    • It then computes the cross-power spectrum of each frequency domain array:
      R = \frac{ \mathbf{G}_a \mathbf{G}_b^*}{|\mathbf{G}_a \mathbf{G}_b^*|}
    • Next the cross-correlation is converted back into the time domain via the inverse DFT:
      r = \mathcal{F}^{-1}\{R\}
    • Finally, it computes the peak location and computes a 5x5 weighted centroid around the peak to achieve sub-pixel accuracy.
      (\Delta x, \Delta y) = \texttt{weightedCentroid} \{\arg \max_{(x, y)}\{r\}\}
    • If non-zero, the response parameter is computed as the sum of the elements of r within the 5x5 centroid around the peak location. It is normalized to a maximum of 1 (meaning there is a single peak) and will be smaller when there are multiple peaks.

    See

    dft, getOptimalDFTSize, idft, mulSpectrums, createHanningWindow

    Declaration

    Objective-C

    + (nonnull Point2d *)phaseCorrelate:(nonnull Mat *)src1
                                   src2:(nonnull Mat *)src2
                                 window:(nonnull Mat *)window;

    Swift

    class func phaseCorrelate(src1: Mat, src2: Mat, window: Mat) -> Point2d

    Parameters

    src1

    Source floating point array (CV_32FC1 or CV_64FC1)

    src2

    Source floating point array (CV_32FC1 or CV_64FC1)

    window

    Floating point array with windowing coefficients to reduce edge effects (optional).

    Return Value

    detected phase shift (sub-pixel) between the two arrays.

  • The function is used to detect translational shifts that occur between two images.

    The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation

    Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.

    The function performs the following equations:

    • First it applies a Hanning window (see http://en.wikipedia.org/wiki/Hann_function) to each image to remove possible edge effects. This window is cached until the array size changes to speed up processing time.
    • Next it computes the forward DFTs of each source array:
      \mathbf{G}_a = \mathcal{F}\{src_1\}, \; \mathbf{G}_b = \mathcal{F}\{src_2\}
      where
      \mathcal{F}
      is the forward DFT.
    • It then computes the cross-power spectrum of each frequency domain array:
      R = \frac{ \mathbf{G}_a \mathbf{G}_b^*}{|\mathbf{G}_a \mathbf{G}_b^*|}
    • Next the cross-correlation is converted back into the time domain via the inverse DFT:
      r = \mathcal{F}^{-1}\{R\}
    • Finally, it computes the peak location and computes a 5x5 weighted centroid around the peak to achieve sub-pixel accuracy.
      (\Delta x, \Delta y) = \texttt{weightedCentroid} \{\arg \max_{(x, y)}\{r\}\}
    • If non-zero, the response parameter is computed as the sum of the elements of r within the 5x5 centroid around the peak location. It is normalized to a maximum of 1 (meaning there is a single peak) and will be smaller when there are multiple peaks.

    See

    dft, getOptimalDFTSize, idft, mulSpectrums, createHanningWindow

    Declaration

    Objective-C

    + (nonnull Point2d *)phaseCorrelate:(nonnull Mat *)src1
                                   src2:(nonnull Mat *)src2;

    Swift

    class func phaseCorrelate(src1: Mat, src2: Mat) -> Point2d

    Parameters

    src1

    Source floating point array (CV_32FC1 or CV_64FC1)

    src2

    Source floating point array (CV_32FC1 or CV_64FC1)

    Return Value

    detected phase shift (sub-pixel) between the two arrays.

  • Creates a smart pointer to a cv::CLAHE class and initializes it.

    Declaration

    Objective-C

    + (nonnull CLAHE *)createCLAHE:(double)clipLimit
                      tileGridSize:(nonnull Size2i *)tileGridSize;

    Swift

    class func createCLAHE(clipLimit: Double, tileGridSize: Size2i) -> CLAHE

    Parameters

    clipLimit

    Threshold for contrast limiting.

    tileGridSize

    Size of grid for histogram equalization. Input image will be divided into equally sized rectangular tiles. tileGridSize defines the number of tiles in row and column.
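
    A minimal usage sketch, assuming the opencv2 module, a placeholder grayscale Mat named gray, and an apply(src:dst:) method on CLAHE as in the Java bindings:

     import opencv2

     let clahe = Imgproc.createCLAHE(clipLimit: 2.0, tileGridSize: Size2i(width: 8, height: 8))

     // Contrast-limited adaptive histogram equalization of a grayscale image.
     let equalized = Mat()
     clahe.apply(src: gray, dst: equalized)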

  • Creates a smart pointer to a cv::CLAHE class and initializes it.

    Declaration

    Objective-C

    + (nonnull CLAHE *)createCLAHE:(double)clipLimit;

    Swift

    class func createCLAHE(clipLimit: Double) -> CLAHE

    Parameters

    clipLimit

    Threshold for contrast limiting. The tile grid size takes its default value in this overload.

  • Creates a smart pointer to a cv::CLAHE class and initializes it.

    Both the clip limit and the tile grid size take their default values in this overload.

    Declaration

    Objective-C

    + (nonnull CLAHE *)createCLAHE;

    Swift

    class func createCLAHE() -> CLAHE
  • Creates a smart pointer to a cv::GeneralizedHoughBallard class and initializes it.

    Declaration

    Objective-C

    + (nonnull GeneralizedHoughBallard *)createGeneralizedHoughBallard;

    Swift

    class func createGeneralizedHoughBallard() -> GeneralizedHoughBallard
  • Creates a smart pointer to a cv::GeneralizedHoughGuil class and initializes it.

    Declaration

    Objective-C

    + (nonnull GeneralizedHoughGuil *)createGeneralizedHoughGuil;

    Swift

    class func createGeneralizedHoughGuil() -> GeneralizedHoughGuil
  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector:
                                         (LineSegmentDetectorModes)_refine
                                                        _scale:(double)_scale
                                                  _sigma_scale:(double)_sigma_scale
                                                        _quant:(double)_quant
                                                       _ang_th:(double)_ang_th
                                                      _log_eps:(double)_log_eps
                                                   _density_th:(double)_density_th
                                                       _n_bins:(int)_n_bins;

    Swift

    class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double, _ang_th: Double, _log_eps: Double, _density_th: Double, _n_bins: Int32) -> LineSegmentDetector

    Parameters

    _refine

    The way found lines will be refined, see #LineSegmentDetectorModes

    _scale

    The scale of the image that will be used to find the lines. Range (0..1].

    _sigma_scale

    Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.

    _quant

    Bound to the quantization error on the gradient norm.

    _ang_th

    Gradient angle tolerance in degrees.

    _log_eps

    Detection threshold: -log10(NFA) > log_eps. Used only when advance refinement is chosen.

    _density_th

    Minimal density of aligned region points in the enclosing rectangle.

    _n_bins

    Number of bins in pseudo-ordering of gradient modulus.
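
    Subject to the note above about the removed implementation, a hedged construction sketch (opencv2 module assumed; the enum case and detect method names follow the Java bindings):

     import opencv2

     // Detector with standard refinement; the remaining parameters keep their defaults.
     let lsd = Imgproc.createLineSegmentDetector(_refine: .LSD_REFINE_STD)

     // Each detected segment is stored as (x1, y1, x2, y2).
     let lines = Mat()
     lsd.detect(image: gray, lines: lines)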

  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector:
                                         (LineSegmentDetectorModes)_refine
                                                        _scale:(double)_scale
                                                  _sigma_scale:(double)_sigma_scale
                                                        _quant:(double)_quant
                                                       _ang_th:(double)_ang_th
                                                      _log_eps:(double)_log_eps
                                                   _density_th:(double)_density_th;

    Swift

    class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double, _ang_th: Double, _log_eps: Double, _density_th: Double) -> LineSegmentDetector

    Parameters

    _refine

    The way found lines will be refined, see #LineSegmentDetectorModes

    _scale

    The scale of the image that will be used to find the lines. Range (0..1].

    _sigma_scale

    Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.

    _quant

    Bound to the quantization error on the gradient norm.

    _ang_th

    Gradient angle tolerance in degrees.

    _log_eps

    Detection threshold: -log10(NFA) > log_eps. Used only when advance refinement is chosen.

    _density_th

    Minimal density of aligned region points in the enclosing rectangle.

  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector:
                                         (LineSegmentDetectorModes)_refine
                                                        _scale:(double)_scale
                                                  _sigma_scale:(double)_sigma_scale
                                                        _quant:(double)_quant
                                                       _ang_th:(double)_ang_th
                                                      _log_eps:(double)_log_eps;

    Swift

    class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double, _ang_th: Double, _log_eps: Double) -> LineSegmentDetector

    Parameters

    _refine

    The way found lines will be refined, see #LineSegmentDetectorModes

    _scale

    The scale of the image that will be used to find the lines. Range (0..1].

    _sigma_scale

    Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.

    _quant

    Bound to the quantization error on the gradient norm.

    _ang_th

    Gradient angle tolerance in degrees.

    _log_eps

    Detection threshold: -log10(NFA) > log_eps. Used only when advance refinement is chosen.

  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector:
                                         (LineSegmentDetectorModes)_refine
                                                        _scale:(double)_scale
                                                  _sigma_scale:(double)_sigma_scale
                                                        _quant:(double)_quant
                                                       _ang_th:(double)_ang_th;

    Swift

    class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double, _ang_th: Double) -> LineSegmentDetector

    Parameters

    _refine

    The way found lines will be refined, see #LineSegmentDetectorModes

    _scale

    The scale of the image that will be used to find the lines. Range (0..1].

    _sigma_scale

    Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.

    _quant

    Bound to the quantization error on the gradient norm.

    _ang_th

    Gradient angle tolerance in degrees.

  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector:
                                         (LineSegmentDetectorModes)_refine
                                                        _scale:(double)_scale
                                                  _sigma_scale:(double)_sigma_scale
                                                        _quant:(double)_quant;

    Swift

    class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double, _quant: Double) -> LineSegmentDetector

    Parameters

    _refine

    The way found lines will be refined, see #LineSegmentDetectorModes

    _scale

    The scale of the image that will be used to find the lines. Range (0..1].

    _sigma_scale

    Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.

    _quant

    Bound to the quantization error on the gradient norm.

  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector:
                                         (LineSegmentDetectorModes)_refine
                                                        _scale:(double)_scale
                                                  _sigma_scale:(double)_sigma_scale;

    Swift

    class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double, _sigma_scale: Double) -> LineSegmentDetector

    Parameters

    _refine

    The way found lines will be refined, see #LineSegmentDetectorModes

    _scale

    The scale of the image that will be used to find the lines. Range (0..1].

    _sigma_scale

    Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale.

  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector:
                                         (LineSegmentDetectorModes)_refine
                                                        _scale:(double)_scale;

    Swift

    class func createLineSegmentDetector(_refine: LineSegmentDetectorModes, _scale: Double) -> LineSegmentDetector

    Parameters

    _refine

    The way found lines will be refined, see #LineSegmentDetectorModes

    _scale

    The scale of the image that will be used to find the lines. Range (0..1].

  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector:
        (LineSegmentDetectorModes)_refine;

    Swift

    class func createLineSegmentDetector(_refine: LineSegmentDetectorModes) -> LineSegmentDetector

    Parameters

    _refine

    The way found lines will be refined, see #LineSegmentDetectorModes

  • Creates a smart pointer to a LineSegmentDetector object and initializes it.

    The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit them, to tailor the detector for their own application.

    Note

    Implementation has been removed due to a license conflict with the original code.

    Declaration

    Objective-C

    + (nonnull LineSegmentDetector *)createLineSegmentDetector;

    Swift

    class func createLineSegmentDetector() -> LineSegmentDetector
  • Calculates the up-right bounding rectangle of a point set or non-zero pixels of gray-scale image.

    The function calculates and returns the minimal up-right bounding rectangle for the specified point set or non-zero pixels of gray-scale image.

    Declaration

    Objective-C

    + (nonnull Rect2i *)boundingRect:(nonnull Mat *)array;

    Swift

    class func boundingRect(array: Mat) -> Rect2i

    Parameters

    array

    Input gray-scale image or 2D point set, stored in std::vector or Mat.

  • Fits an ellipse around a set of 2D points.

    The function calculates the ellipse that fits (in a least-squares sense) a set of 2D points best of all. It returns the rotated rectangle in which the ellipse is inscribed. The first algorithm described by CITE: Fitzgibbon95 is used. Developers should keep in mind that the returned ellipse/rotatedRect data can contain negative indices, due to the data points being close to the border of the containing Mat element.

    Declaration

    Objective-C

    + (nonnull RotatedRect *)fitEllipse:(nonnull NSArray<Point2f *> *)points;

    Swift

    class func fitEllipse(points: [Point2f]) -> RotatedRect

    Parameters

    points

    Input 2D point set, stored in std::vector<> or Mat
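
    A minimal sketch, assuming the opencv2 module and a placeholder contour of at least five Point2f values:

     import opencv2

     let box = Imgproc.fitEllipse(points: contour)  // needs >= 5 points

     // The RotatedRect circumscribes the fitted ellipse.
     print("center=(\(box.center.x), \(box.center.y)) " +
           "axes=\(box.size.width)x\(box.size.height) angle=\(box.angle)")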

  • Fits an ellipse around a set of 2D points.

    The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Approximate Mean Square (AMS) proposed by CITE: Taubin1991 is used.

    For an ellipse, this basis set is \chi = \left(x^2, x y, y^2, x, y, 1\right), which is a set of six free coefficients A^T = \left\{A_{\text{xx}}, A_{\text{xy}}, A_{\text{yy}}, A_x, A_y, A_0\right\}. However, to specify an ellipse, all that is needed is five numbers: the major and minor axes lengths (a, b), the position (x_0, y_0), and the orientation \theta. This is because the basis set includes lines, quadratics, parabolic and hyperbolic functions as well as elliptical functions as possible fits. If the fit is found to be a parabolic or hyperbolic function then the standard #fitEllipse method is used. The AMS method restricts the fit to parabolic, hyperbolic and elliptical curves by imposing the condition that A^T (D_x^T D_x + D_y^T D_y) A = 1, where the matrices D_x and D_y are the partial derivatives of the design matrix D with respect to x and y. The matrices are formed row by row by applying the following to each of the points in the set:

    \begin{aligned} D(i,:) &= \left\{x_i^2, x_i y_i, y_i^2, x_i, y_i, 1\right\} \\ D_x(i,:) &= \left\{2 x_i, y_i, 0, 1, 0, 0\right\} \\ D_y(i,:) &= \left\{0, x_i, 2 y_i, 0, 1, 0\right\} \end{aligned}

    The AMS method minimizes the cost function

    \begin{aligned} \epsilon^2 = \frac{A^T D^T D A}{A^T (D_x^T D_x + D_y^T D_y) A} \end{aligned}

    The minimum cost is found by solving the generalized eigenvalue problem.

    \begin{aligned} D^T D A = \lambda \left( D_x^T D_x + D_y^T D_y\right) A \end{aligned}

    Declaration

    Objective-C

    + (nonnull RotatedRect *)fitEllipseAMS:(nonnull Mat *)points;

    Swift

    class func fitEllipseAMS(points: Mat) -> RotatedRect

    Parameters

    points

    Input 2D point set, stored in std::vector<> or Mat

  • Fits an ellipse around a set of 2D points.

    The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Direct least square (Direct) method by CITE: Fitzgibbon1999 is used.

    For an ellipse, this basis set is \chi = \left(x^2, x y, y^2, x, y, 1\right), which is a set of six free coefficients A^T = \left\{A_{\text{xx}}, A_{\text{xy}}, A_{\text{yy}}, A_x, A_y, A_0\right\}. However, to specify an ellipse, all that is needed is five numbers: the major and minor axes lengths (a, b), the position (x_0, y_0), and the orientation \theta. This is because the basis set includes lines, quadratics, parabolic and hyperbolic functions as well as elliptical functions as possible fits. The Direct method confines the fit to ellipses by ensuring that 4 A_{xx} A_{yy} - A_{xy}^2 > 0. The condition imposed is that 4 A_{xx} A_{yy} - A_{xy}^2 = 1, which satisfies the inequality and, as the coefficients can be arbitrarily scaled, is not overly restrictive.

    \begin{aligned} \epsilon^2 = A^T D^T D A \quad \text{with} \quad A^T C A = 1 \quad \text{and} \quad C = \left(\begin{matrix} 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{matrix}\right) \end{aligned}

    The minimum cost is found by solving the generalized eigenvalue problem.

    \begin{aligned} D^T D A = \lambda \left( C\right) A \end{aligned}

    The system produces only one positive eigenvalue \lambda, which is chosen as the solution with its eigenvector \mathbf{u}. These are used to find the coefficients

    \begin{aligned} A = \sqrt{\frac{1}{\mathbf{u}^T C \mathbf{u}}}\, \mathbf{u} \end{aligned}

    The scaling factor guarantees that A^T C A = 1.

    Declaration

    Objective-C

    + (nonnull RotatedRect *)fitEllipseDirect:(nonnull Mat *)points;

    Swift

    class func fitEllipseDirect(points: Mat) -> RotatedRect

    Parameters

    points

    Input 2D point set, stored in std::vector<> or Mat

  • Finds a rotated rectangle of the minimum area enclosing the input 2D point set.

    The function calculates and returns the minimum-area bounding rectangle (possibly rotated) for a specified point set. Developer should keep in mind that the returned RotatedRect can contain negative indices when data is close to the containing Mat element boundary.

    Declaration

    Objective-C

    + (nonnull RotatedRect *)minAreaRect:(nonnull NSArray<Point2f *> *)points;

    Swift

    class func minAreaRect(points: [Point2f]) -> RotatedRect

    Parameters

    points

    Input vector of 2D points, stored in std::vector<> or Mat
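
    A hedged sketch contrasting the rotated rectangle with the axis-aligned one from boundingRect (opencv2 module assumed; points is a placeholder [Point2f] and pointsMat its Mat form):

     import opencv2

     let rotated = Imgproc.minAreaRect(points: points)
     let upright = Imgproc.boundingRect(array: pointsMat)

     // The rotated box never has a larger area than the upright one.
     let rotatedArea = Double(rotated.size.width * rotated.size.height)
     let uprightArea = Double(upright.width * upright.height)
     print("rotated: \(rotatedArea), upright: \(uprightArea)")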

  • Calculates the width and height of a text string.

    The function cv::getTextSize calculates and returns the size of a box that contains the specified text. That is, the following code renders some text, the tight box surrounding it, and the baseline:

     String text = "Funny text inside the box";
     int fontFace = FONT_HERSHEY_SCRIPT_SIMPLEX;
     double fontScale = 2;
     int thickness = 3;
    
     Mat img(600, 800, CV_8UC3, Scalar::all(0));
    
     int baseline=0;
     Size textSize = getTextSize(text, fontFace,
                                 fontScale, thickness, &baseline);
     baseline += thickness;
    
     // center the text
     Point textOrg((img.cols - textSize.width)/2,
                   (img.rows + textSize.height)/2);
    
     // draw the box
     rectangle(img, textOrg + Point(0, baseline),
               textOrg + Point(textSize.width, -textSize.height),
               Scalar(0,0,255));
     // ... and the baseline first
     line(img, textOrg + Point(0, thickness),
          textOrg + Point(textSize.width, thickness),
          Scalar(0, 0, 255));
    
     // then put the text itself
     putText(img, text, textOrg, fontFace, fontScale,
             Scalar::all(255), thickness, 8);
    

    Declaration

    Objective-C

    + (nonnull Size2i *)getTextSize:(nonnull NSString *)text
                           fontFace:(HersheyFonts)fontFace
                          fontScale:(double)fontScale
                          thickness:(int)thickness
                           baseLine:(nonnull int *)baseLine;

    Swift

    class func getTextSize(text: String, fontFace: HersheyFonts, fontScale: Double, thickness: Int32, baseLine: UnsafeMutablePointer<Int32>) -> Size2i

    Parameters

    text

    Input text string.

    fontFace

    Font to use, see #HersheyFonts.

    fontScale

    Font scale factor that is multiplied by the font-specific base size.

    thickness

    Thickness of lines used to render the text. See #putText for details.

    baseLine

    y-coordinate of the baseline relative to the bottom-most text point.

    Return Value

    The size of a box that contains the specified text.

  • Declaration

    Objective-C

    + (BOOL)clipLine:(nonnull Rect2i *)imgRect
                 pt1:(nonnull Point2i *)pt1
                 pt2:(nonnull Point2i *)pt2;

    Swift

    class func clipLine(imgRect: Rect2i, pt1: Point2i, pt2: Point2i) -> Bool

    Parameters

    imgRect

    Image rectangle.

    pt1

    First line point.

    pt2

    Second line point.

  • Tests a contour convexity.

    The function tests whether the input contour is convex or not. The contour must be simple, that is, without self-intersections. Otherwise, the function output is undefined.

    Declaration

    Objective-C

    + (BOOL)isContourConvex:(nonnull NSArray<Point2i *> *)contour;

    Swift

    class func isContourConvex(contour: [Point2i]) -> Bool

    Parameters

    contour

    Input vector of 2D points, stored in std::vector<> or Mat

  • Calculates a contour perimeter or a curve length.

    The function computes a curve length or a closed contour perimeter.

    Declaration

    Objective-C

    + (double)arcLength:(nonnull NSArray<Point2f *> *)curve closed:(BOOL)closed;

    Swift

    class func arcLength(curve: [Point2f], closed: Bool) -> Double

    Parameters

    curve

    Input vector of 2D points, stored in std::vector or Mat.

    closed

    Flag indicating whether the curve is closed or not.

  • Compares two histograms.

    The function cv::compareHist compares two dense or two sparse histograms using the specified method.

    The function returns d(H_1, H_2).

    While the function works well with 1-, 2-, 3-dimensional dense histograms, it may not be suitable for high-dimensional sparse histograms. In such histograms, because of aliasing and sampling problems, the coordinates of non-zero histogram bins can slightly shift. To compare such histograms or more general sparse configurations of weighted points, consider using the #EMD function.

    Declaration

    Objective-C

    + (double)compareHist:(nonnull Mat *)H1
                       H2:(nonnull Mat *)H2
                   method:(HistCompMethods)method;

    Swift

    class func compareHist(H1: Mat, H2: Mat, method: HistCompMethods) -> Double

    Parameters

    H1

    First compared histogram.

    H2

    Second compared histogram of the same size as H1 .

    method

    Comparison method, see #HistCompMethods
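
    A hedged sketch comparing two previously computed histograms (opencv2 module assumed; hist1 and hist2 are placeholders, e.g. produced by calcHist):

     import opencv2

     // HISTCMP_CORREL yields 1.0 for identical histograms, lower for dissimilar ones.
     let similarity = Imgproc.compareHist(H1: hist1, H2: hist2, method: .HISTCMP_CORREL)
     if similarity > 0.9 {
         print("histograms are very similar: \(similarity)")
     }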

  • Calculates a contour area.

    The function computes a contour area. Similarly to moments, the area is computed using the Green formula. Thus, the returned area and the number of non-zero pixels, if you draw the contour using #drawContours or #fillPoly, can be different. Also, the function will most certainly give wrong results for contours with self-intersections.

    Example:

     vector<Point> contour;
     contour.push_back(Point2f(0, 0));
     contour.push_back(Point2f(10, 0));
     contour.push_back(Point2f(10, 10));
     contour.push_back(Point2f(5, 4));
    
     double area0 = contourArea(contour);
     vector<Point> approx;
     approxPolyDP(contour, approx, 5, true);
     double area1 = contourArea(approx);
    
     cout << "area0 =" << area0 << endl <<
             "area1 =" << area1 << endl <<
             "approx poly vertices" << approx.size() << endl;
    

    Declaration

    Objective-C

    + (double)contourArea:(nonnull Mat *)contour oriented:(BOOL)oriented;

    Swift

    class func contourArea(contour: Mat, oriented: Bool) -> Double

    Parameters

    contour

    Input vector of 2D points (contour vertices), stored in std::vector or Mat.

    oriented

    Oriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned.

  • Calculates a contour area.

    The function computes a contour area. Similarly to moments, the area is computed using the Green formula. Thus, the returned area and the number of non-zero pixels, if you draw the contour using #drawContours or #fillPoly, can be different. Also, the function will most certainly give wrong results for contours with self-intersections.

    Example:

     vector<Point> contour;
     contour.push_back(Point2f(0, 0));
     contour.push_back(Point2f(10, 0));
     contour.push_back(Point2f(10, 10));
     contour.push_back(Point2f(5, 4));
    
     double area0 = contourArea(contour);
     vector<Point> approx;
     approxPolyDP(contour, approx, 5, true);
     double area1 = contourArea(approx);
    
     cout << "area0 =" << area0 << endl <<
             "area1 =" << area1 << endl <<
             "approx poly vertices" << approx.size() << endl;
    

    Declaration

    Objective-C

    + (double)contourArea:(nonnull Mat *)contour;

    Swift

    class func contourArea(contour: Mat) -> Double

    Parameters

    contour

    Input vector of 2D points (contour vertices), stored in std::vector or Mat. In this overload the absolute value of the area is returned (oriented defaults to false).

  • Calculates the font-specific size to use to achieve a given height in pixels.

    See

    cv::putText

    Declaration

    Objective-C

    + (double)getFontScaleFromHeight:(int)fontFace
                         pixelHeight:(int)pixelHeight
                           thickness:(int)thickness;

    Swift

    class func getFontScaleFromHeight(fontFace: Int32, pixelHeight: Int32, thickness: Int32) -> Double

    Parameters

    fontFace

    Font to use, see cv::HersheyFonts.

    pixelHeight

    Pixel height to compute the fontScale for

    thickness

    Thickness of lines used to render the text. See #putText for details.

    Return Value

    The fontSize to use for cv::putText

  • Calculates the font-specific size to use to achieve a given height in pixels.

    See

    cv::putText

    Declaration

    Objective-C

    + (double)getFontScaleFromHeight:(int)fontFace pixelHeight:(int)pixelHeight;

    Swift

    class func getFontScaleFromHeight(fontFace: Int32, pixelHeight: Int32) -> Double

    Parameters

    fontFace

    Font to use, see cv::HersheyFonts.

    pixelHeight

    Pixel height to compute the fontScale for

    Return Value

    The fontSize to use for cv::putText

  • Compares two shapes.

    The function compares two shapes. All three implemented methods use the Hu invariants (see #HuMoments)

    Declaration

    Objective-C

    + (double)matchShapes:(nonnull Mat *)contour1
                 contour2:(nonnull Mat *)contour2
                   method:(ShapeMatchModes)method
                parameter:(double)parameter;

    Swift

    class func matchShapes(contour1: Mat, contour2: Mat, method: ShapeMatchModes, parameter: Double) -> Double

    Parameters

    contour1

    First contour or grayscale image.

    contour2

    Second contour or grayscale image.

    method

    Comparison method, see #ShapeMatchModes

    parameter

    Method-specific parameter (not supported now).

  • Finds a triangle of minimum area enclosing a 2D point set and returns its area.

    The function finds a triangle of minimum area enclosing the given set of 2D points and returns its area. The output for a given 2D point set is shown in the image below. 2D points are depicted in red and the enclosing triangle in yellow.

    Sample output of the minimum enclosing triangle function

    The implementation of the algorithm is based on O'Rourke’s CITE: ORourke86 and Klee and Laskowski’s CITE: KleeLaskowski85 papers. O'Rourke provides a \theta(n) algorithm for finding the minimal enclosing triangle of a 2D convex polygon with n vertices. Since the #minEnclosingTriangle function takes a 2D point set as input, an additional preprocessing step of computing the convex hull of the 2D point set is required. The complexity of the #convexHull function is O(n \log(n)), which is higher than \theta(n). Thus the overall complexity of the function is O(n \log(n)).

    Declaration

    Objective-C

    + (double)minEnclosingTriangle:(nonnull Mat *)points
                          triangle:(nonnull Mat *)triangle;

    Swift

    class func minEnclosingTriangle(points: Mat, triangle: Mat) -> Double

    Parameters

    points

    Input vector of 2D points with depth CV_32S or CV_32F, stored in std::vector<> or Mat

    triangle

    Output vector of three 2D points defining the vertices of the triangle. The depth of the OutputArray must be CV_32F.

  • Performs a point-in-contour test.

    The function determines whether the point is inside a contour, outside, or lies on an edge (or coincides with a vertex). It returns positive (inside), negative (outside), or zero (on an edge) value, correspondingly. When measureDist=false , the return value is +1, -1, and 0, respectively. Otherwise, the return value is a signed distance between the point and the nearest contour edge.

    See below a sample output of the function where each image pixel is tested against the contour:

    sample output

    Declaration

    Objective-C

    + (double)pointPolygonTest:(nonnull NSArray<Point2f *> *)contour
                            pt:(nonnull Point2f *)pt
                   measureDist:(BOOL)measureDist;

    Swift

    class func pointPolygonTest(contour: [Point2f], pt: Point2f, measureDist: Bool) -> Double

    Parameters

    contour

    Input contour.

    pt

    Point tested against the contour.

    measureDist

    If true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not.
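
    A small worked sketch, assuming the opencv2 module; for the point (10, 50) inside a 100x100 square the signed distance is +10, the distance to the nearest (left) edge:

     import opencv2

     let square = [Point2f(x: 0, y: 0), Point2f(x: 100, y: 0),
                   Point2f(x: 100, y: 100), Point2f(x: 0, y: 100)]

     // Positive inside, negative outside, zero on an edge.
     let d = Imgproc.pointPolygonTest(contour: square, pt: Point2f(x: 10, y: 50), measureDist: true)
     print(d)  // 10.0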

  • Applies a fixed-level threshold to each array element.

    The function applies fixed-level thresholding to a multiple-channel array. The function is typically used to get a bi-level (binary) image out of a grayscale image (#compare could also be used for this purpose) or for removing noise, that is, filtering out pixels with too small or too large values. There are several types of thresholding supported by the function; they are determined by the type parameter.

    Also, the special values #THRESH_OTSU or #THRESH_TRIANGLE may be combined with one of the above values. In these cases, the function determines the optimal threshold value using the Otsu or Triangle algorithm and uses it instead of the specified thresh.

    Note

    Currently, the Otsu’s and Triangle methods are implemented only for 8-bit single-channel images.

    Declaration

    Objective-C

    + (double)threshold:(nonnull Mat *)src
                    dst:(nonnull Mat *)dst
                 thresh:(double)thresh
                 maxval:(double)maxval
                   type:(ThresholdTypes)type;

    Swift

    class func threshold(src: Mat, dst: Mat, thresh: Double, maxval: Double, type: ThresholdTypes) -> Double

    Parameters

    src

    input array (multiple-channel, 8-bit or 32-bit floating point).

    dst

    output array of the same size and type and the same number of channels as src.

    thresh

    threshold value.

    maxval

    maximum value to use with the #THRESH_BINARY and #THRESH_BINARY_INV thresholding types.

    type

    thresholding type (see #ThresholdTypes).

    Return Value

    The computed threshold value if the Otsu or Triangle method is used.
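
    A hedged sketch of Otsu binarization (opencv2 module assumed; gray is a placeholder 8-bit single-channel Mat; combining the flags through rawValue is an assumption about the generated Swift enums):

     import opencv2

     let binary = Mat()
     // THRESH_BINARY combined with THRESH_OTSU; thresh is ignored and computed automatically.
     let flags = ThresholdTypes(rawValue: ThresholdTypes.THRESH_BINARY.rawValue
                                        | ThresholdTypes.THRESH_OTSU.rawValue)!
     let otsu = Imgproc.threshold(src: gray, dst: binary, thresh: 0, maxval: 255, type: flags)
     print("Otsu threshold: \(otsu)")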

  • Finds intersection of two convex polygons

    Note

    intersectConvexConvex doesn’t confirm that both polygons are convex and will return invalid results if they aren’t.

    Declaration

    Objective-C

    + (float)intersectConvexConvex:(nonnull Mat *)_p1
                               _p2:(nonnull Mat *)_p2
                              _p12:(nonnull Mat *)_p12
                      handleNested:(BOOL)handleNested;

    Swift

    class func intersectConvexConvex(_p1: Mat, _p2: Mat, _p12: Mat, handleNested: Bool) -> Float

    Parameters

    _p1

    First polygon

    _p2

    Second polygon

    _p12

    Output polygon describing the intersecting area

    handleNested

    When true, an intersection is found if one of the polygons is fully enclosed in the other. When false, no intersection is found. If the polygons share a side or the vertex of one polygon lies on an edge of the other, they are not considered nested and an intersection will be found regardless of the value of handleNested.

    Return Value

    Absolute value of area of intersecting polygon

  • Finds intersection of two convex polygons

    Note

    intersectConvexConvex doesn’t confirm that both polygons are convex and will return invalid results if they aren’t.

    Declaration

    Objective-C

    + (float)intersectConvexConvex:(nonnull Mat *)_p1
                               _p2:(nonnull Mat *)_p2
                              _p12:(nonnull Mat *)_p12;

    Swift

    class func intersectConvexConvex(_p1: Mat, _p2: Mat, _p12: Mat) -> Float

    Parameters

    _p1

    First polygon

    _p2

    Second polygon

    _p12

    Output polygon describing the intersecting area.

    Return Value

    Absolute value of area of intersecting polygon

  • Computes the “minimal work” distance between two weighted point configurations.

    The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in CITE: RubnerSept98, CITE: Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, thus the complexity is exponential in the worst case, though, on average it is much faster. In the case of a real metric the lower boundary can be calculated even faster (using linear-time algorithm) and it can be used to determine roughly whether the two signatures are far enough so that they cannot relate to the same object.

    Declaration

    Objective-C

    + (float)EMD:(nonnull Mat *)signature1
        signature2:(nonnull Mat *)signature2
          distType:(DistanceTypes)distType
              cost:(nonnull Mat *)cost
              flow:(nonnull Mat *)flow;

    Swift

    class func wrapperEMD(signature1: Mat, signature2: Mat, distType: DistanceTypes, cost: Mat, flow: Mat) -> Float

    Parameters

    signature1

    First signature, a

    \texttt{size1}\times \texttt{dims}+1
    floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.

    signature2

    Second signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra “dummy” point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.

    distType

    Used metric. See #DistanceTypes.

    cost

    User-defined

    \texttt{size1}\times \texttt{size2}
    cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.

    lowerBound

    Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). You must initialize *lowerBound. If the calculated distance between mass centers is greater than or equal to *lowerBound (it means that the signatures are far enough apart), the function does not calculate EMD. In any case, *lowerBound is set to the calculated distance between mass centers on return. Thus, if you want to calculate both the distance between mass centers and EMD, *lowerBound should be set to 0.

    flow

    Resultant

    \texttt{size1} \times \texttt{size2}
    flow matrix:
    \texttt{flow}_{i,j}
    is a flow from
    i
    -th point of signature1 to
    j
    -th point of signature2 .

  • Computes the “minimal work” distance between two weighted point configurations.

    The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in CITE: RubnerSept98, CITE: Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, so the complexity is exponential in the worst case, though on average it is much faster. In the case of a real metric, the lower boundary can be calculated even faster (using a linear-time algorithm), and it can be used to determine roughly whether the two signatures are far enough apart that they cannot relate to the same object.

    Declaration

    Objective-C

    + (float)EMD:(nonnull Mat *)signature1
        signature2:(nonnull Mat *)signature2
          distType:(DistanceTypes)distType
              cost:(nonnull Mat *)cost;

    Swift

    class func wrapperEMD(signature1: Mat, signature2: Mat, distType: DistanceTypes, cost: Mat) -> Float

    Parameters

    signature1

    First signature, a

    \texttt{size1}\times \texttt{dims}+1
    floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.

    signature2

    Second signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra “dummy” point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.

    distType

    Used metric. See #DistanceTypes.

    cost

    User-defined

    \texttt{size1}\times \texttt{size2}
    cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.

    lowerBound

    Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). You must initialize *lowerBound. If the calculated distance between mass centers is greater than or equal to *lowerBound (it means that the signatures are far enough apart), the function does not calculate EMD. In any case, *lowerBound is set to the calculated distance between mass centers on return. Thus, if you want to calculate both the distance between mass centers and EMD, *lowerBound should be set to 0.

  • Computes the “minimal work” distance between two weighted point configurations.

    The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in CITE: RubnerSept98, CITE: Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, so the complexity is exponential in the worst case, though on average it is much faster. In the case of a real metric, the lower boundary can be calculated even faster (using a linear-time algorithm), and it can be used to determine roughly whether the two signatures are far enough apart that they cannot relate to the same object.

    Declaration

    Objective-C

    + (float)EMD:(nonnull Mat *)signature1
        signature2:(nonnull Mat *)signature2
          distType:(DistanceTypes)distType;

    Swift

    class func wrapperEMD(signature1: Mat, signature2: Mat, distType: DistanceTypes) -> Float

    Parameters

    signature1

    First signature, a

    \texttt{size1}\times \texttt{dims}+1
    floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.

    signature2

    Second signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra “dummy” point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.

    distType

    Used metric. See #DistanceTypes.
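
    To make the signature layout concrete, a minimal Swift sketch for two 1-D point sets. It assumes the opencv2 module, the DIST_L2 case spelling, and the throwing Mat.put(row:col:data:) helper from the Swift extensions; none of these are part of the declaration above.

    import opencv2

    // Each row is (weight, coordinate): unit weights at x = 0, 1 and x = 2, 3.
    let sig1 = Mat(rows: 2, cols: 2, type: CvType.CV_32FC1)
    let sig2 = Mat(rows: 2, cols: 2, type: CvType.CV_32FC1)
    _ = try? sig1.put(row: 0, col: 0, data: [1.0, 0.0, 1.0, 1.0] as [Float])
    _ = try? sig2.put(row: 0, col: 0, data: [1.0, 2.0, 1.0, 3.0] as [Float])

    // Each unit of mass moves a distance of 2, so the result should be 2.
    let dist = Imgproc.wrapperEMD(signature1: sig1, signature2: sig2,
                                  distType: .DIST_L2)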

  • computes the connected components labeled image of boolean image

    image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently Grana’s (BBDT) and Wu’s (SAUF) algorithms are supported, see #ConnectedComponentsAlgorithmsTypes for details. Note that the SAUF algorithm forces a row-major ordering of labels while BBDT does not. This function uses a parallel version of both Grana’s and Wu’s algorithms if at least one allowed parallel framework is enabled and if the rows of the image are at least twice the number returned by #getNumberOfCPUs.

    Declaration

    Objective-C

    + (int)connectedComponentsWithAlgorithm:(nonnull Mat *)image
                                     labels:(nonnull Mat *)labels
                               connectivity:(int)connectivity
                                      ltype:(int)ltype
                                    ccltype:(int)ccltype;

    Swift

    class func connectedComponents(image: Mat, labels: Mat, connectivity: Int32, ltype: Int32, ccltype: Int32) -> Int32

    Parameters

    image

    the 8-bit single-channel image to be labeled

    labels

    destination labeled image

    connectivity

    8 or 4 for 8-way or 4-way connectivity respectively

    ltype

    output image label type. Currently CV_32S and CV_16U are supported.

    ccltype

    connected components algorithm type (see the #ConnectedComponentsAlgorithmsTypes).
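
    A minimal Swift sketch of this overload follows; it assumes the opencv2 module, and Imgproc.rectangle is an assumption used only to build test data. Since this overload takes ccltype as a plain int, the enum’s rawValue is passed.

    import opencv2

    // Black background with one filled 3x3 white blob.
    let img = Mat(rows: 8, cols: 8, type: CvType.CV_8UC1, scalar: Scalar(0))
    Imgproc.rectangle(img: img, pt1: Point2i(x: 2, y: 2), pt2: Point2i(x: 4, y: 4),
                      color: Scalar(255), thickness: -1)  // -1 = filled

    let labels = Mat()
    // Expect N = 2: label 0 for the background, label 1 for the blob.
    let n = Imgproc.connectedComponents(image: img, labels: labels,
                                        connectivity: 8, ltype: CvType.CV_32S,
                                        ccltype: ConnectedComponentsAlgorithmsTypes.CCL_DEFAULT.rawValue)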

  • Declaration

    Objective-C

    + (int)connectedComponents:(nonnull Mat *)image
                        labels:(nonnull Mat *)labels
                  connectivity:(int)connectivity
                         ltype:(int)ltype;

    Swift

    class func connectedComponents(image: Mat, labels: Mat, connectivity: Int32, ltype: Int32) -> Int32

    Parameters

    image

    the 8-bit single-channel image to be labeled

    labels

    destination labeled image

    connectivity

    8 or 4 for 8-way or 4-way connectivity respectively

    ltype

    output image label type. Currently CV_32S and CV_16U are supported.

  • Declaration

    Objective-C

    + (int)connectedComponents:(nonnull Mat *)image
                        labels:(nonnull Mat *)labels
                  connectivity:(int)connectivity;

    Swift

    class func connectedComponents(image: Mat, labels: Mat, connectivity: Int32) -> Int32

    Parameters

    image

    the 8-bit single-channel image to be labeled

    labels

    destination labeled image

    connectivity

    8 or 4 for 8-way or 4-way connectivity respectively

  • Declaration

    Objective-C

    + (int)connectedComponents:(nonnull Mat *)image labels:(nonnull Mat *)labels;

    Swift

    class func connectedComponents(image: Mat, labels: Mat) -> Int32

    Parameters

    image

    the 8-bit single-channel image to be labeled

    labels

    destination labeled image

  • computes the connected components labeled image of boolean image and also produces a statistics output for each label

    image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently Grana’s (BBDT) and Wu’s (SAUF) algorithms are supported, see #ConnectedComponentsAlgorithmsTypes for details. Note that the SAUF algorithm forces a row-major ordering of labels while BBDT does not. This function uses a parallel version of both Grana’s and Wu’s algorithms (statistics included) if at least one allowed parallel framework is enabled and if the rows of the image are at least twice the number returned by #getNumberOfCPUs.

    Declaration

    Objective-C

    + (int)
        connectedComponentsWithStatsWithAlgorithm:(nonnull Mat *)image
                                           labels:(nonnull Mat *)labels
                                            stats:(nonnull Mat *)stats
                                        centroids:(nonnull Mat *)centroids
                                     connectivity:(int)connectivity
                                            ltype:(int)ltype
                                          ccltype:
                                              (ConnectedComponentsAlgorithmsTypes)
                                                  ccltype;

    Swift

    class func connectedComponentsWithStats(image: Mat, labels: Mat, stats: Mat, centroids: Mat, connectivity: Int32, ltype: Int32, ccltype: ConnectedComponentsAlgorithmsTypes) -> Int32

    Parameters

    image

    the 8-bit single-channel image to be labeled

    labels

    destination labeled image

    stats

    statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes, selecting the statistic. The data type is CV_32S.

    centroids

    centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.

    connectivity

    8 or 4 for 8-way or 4-way connectivity respectively

    ltype

    output image label type. Currently CV_32S and CV_16U are supported.

    ccltype

    connected components algorithm type (see #ConnectedComponentsAlgorithmsTypes).
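
    A minimal Swift sketch reading the area statistic back out; it assumes the opencv2 module, and Imgproc.rectangle, Mat.get(row:col:), and the CC_STAT_AREA case are assumptions beyond the declaration above.

    import opencv2

    let img = Mat(rows: 8, cols: 8, type: CvType.CV_8UC1, scalar: Scalar(0))
    Imgproc.rectangle(img: img, pt1: Point2i(x: 1, y: 1), pt2: Point2i(x: 3, y: 3),
                      color: Scalar(255), thickness: -1)  // -1 = filled

    let labels = Mat(), stats = Mat(), centroids = Mat()
    let n = Imgproc.connectedComponentsWithStats(image: img, labels: labels,
                                                 stats: stats, centroids: centroids,
                                                 connectivity: 8, ltype: CvType.CV_32S,
                                                 ccltype: .CCL_DEFAULT)

    // stats(1, CC_STAT_AREA): the filled 3x3 rectangle has area 9.
    let area = stats.get(row: 1, col: ConnectedComponentsTypes.CC_STAT_AREA.rawValue)[0]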

  • Declaration

    Objective-C

    + (int)connectedComponentsWithStats:(nonnull Mat *)image
                                 labels:(nonnull Mat *)labels
                                  stats:(nonnull Mat *)stats
                              centroids:(nonnull Mat *)centroids
                           connectivity:(int)connectivity
                                  ltype:(int)ltype;

    Swift

    class func connectedComponentsWithStats(image: Mat, labels: Mat, stats: Mat, centroids: Mat, connectivity: Int32, ltype: Int32) -> Int32

    Parameters

    image

    the 8-bit single-channel image to be labeled

    labels

    destination labeled image

    stats

    statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes, selecting the statistic. The data type is CV_32S.

    centroids

    centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.

    connectivity

    8 or 4 for 8-way or 4-way connectivity respectively

    ltype

    output image label type. Currently CV_32S and CV_16U are supported.

  • Declaration

    Objective-C

    + (int)connectedComponentsWithStats:(nonnull Mat *)image
                                 labels:(nonnull Mat *)labels
                                  stats:(nonnull Mat *)stats
                              centroids:(nonnull Mat *)centroids
                           connectivity:(int)connectivity;

    Swift

    class func connectedComponentsWithStats(image: Mat, labels: Mat, stats: Mat, centroids: Mat, connectivity: Int32) -> Int32

    Parameters

    image

    the 8-bit single-channel image to be labeled

    labels

    destination labeled image

    stats

    statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes, selecting the statistic. The data type is CV_32S.

    centroids

    centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.

    connectivity

    8 or 4 for 8-way or 4-way connectivity respectively

  • Declaration

    Objective-C

    + (int)connectedComponentsWithStats:(nonnull Mat *)image
                                 labels:(nonnull Mat *)labels
                                  stats:(nonnull Mat *)stats
                              centroids:(nonnull Mat *)centroids;

    Swift

    class func connectedComponentsWithStats(image: Mat, labels: Mat, stats: Mat, centroids: Mat) -> Int32

    Parameters

    image

    the 8-bit single-channel image to be labeled

    labels

    destination labeled image

    stats

    statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes, selecting the statistic. The data type is CV_32S.

    centroids

    centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.

  • Fills a connected component with the given color.

    The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at

    (x,y)
    is considered to belong to the repainted domain if:

    • in case of a grayscale image and floating range

      \texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}

    • in case of a grayscale image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}

    • in case of a color image and floating range

      \texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
      \texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g
      and
      \texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b

    • in case of a color image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g
      and
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b

    where

    src(x',y')
    is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of the pixel should be close enough to:

    • Color/brightness of one of its neighbors that already belong to the connected component in case of a floating range.
    • Color/brightness of the seed point in case of a fixed range.

    Use these functions to either mark a connected component with the specified color in-place, or build a mask and then extract the contour, or copy the region to another image, and so on.

    Note

    Since the mask is larger than the filled image, a pixel

    (x, y)
    in image corresponds to the pixel
    (x+1, y+1)
    in the mask .

    Declaration

    Objective-C

    + (int)floodFill:(nonnull Mat *)image
                mask:(nonnull Mat *)mask
           seedPoint:(nonnull Point2i *)seedPoint
              newVal:(nonnull Scalar *)newVal
                rect:(nonnull Rect2i *)rect
              loDiff:(nonnull Scalar *)loDiff
              upDiff:(nonnull Scalar *)upDiff
               flags:(int)flags;

    Swift

    class func floodFill(image: Mat, mask: Mat, seedPoint: Point2i, newVal: Scalar, rect: Rect2i, loDiff: Scalar, upDiff: Scalar, flags: Int32) -> Int32

    Parameters

    image

    Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

    mask

    Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to the value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.

    seedPoint

    Starting point.

    newVal

    New value of the repainted domain pixels.

    loDiff

    Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

    upDiff

    Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

    rect

    Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.

    flags

    Operation flags. The first 8 bits contain a connectivity value. The default value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered. A connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered. The next 8 bits (8-16) contain a value between 1 and 255 with which to fill the mask (the default value is 1). For example, 4 | ( 255 << 8 ) will consider 4 nearest neighbours and fill the mask with a value of 255. The following additional options occupy higher bits and therefore may be further combined with the connectivity and mask fill values using bit-wise or (|), see #FloodFillFlags.
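
    Putting the mask and flags rules together, a minimal Swift sketch of this overload; it assumes the opencv2 module, and Imgproc.rectangle is an assumption used only to build test data.

    import opencv2

    // Gray image containing a darker square; flood-fill it in place.
    let image = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(100))
    Imgproc.rectangle(img: image, pt1: Point2i(x: 20, y: 20), pt2: Point2i(x: 60, y: 60),
                      color: Scalar(50), thickness: -1)  // -1 = filled

    // Mask must be 2 pixels wider and taller than image, zero-initialized.
    let mask = Mat(rows: 102, cols: 102, type: CvType.CV_8UC1, scalar: Scalar(0))
    let rect = Rect2i()

    // 8-connectivity, and fill the mask with 255 instead of the default 1.
    let area = Imgproc.floodFill(image: image, mask: mask,
                                 seedPoint: Point2i(x: 40, y: 40),
                                 newVal: Scalar(200), rect: rect,
                                 loDiff: Scalar(10), upDiff: Scalar(10),
                                 flags: 8 | (255 << 8))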

  • Fills a connected component with the given color.

    The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at

    (x,y)
    is considered to belong to the repainted domain if:

    • in case of a grayscale image and floating range

      \texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}

    • in case of a grayscale image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}

    • in case of a color image and floating range

      \texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
      \texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g
      and
      \texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b

    • in case of a color image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g
      and
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b

    where

    src(x',y')
    is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of the pixel should be close enough to:

    • Color/brightness of one of its neighbors that already belong to the connected component in case of a floating range.
    • Color/brightness of the seed point in case of a fixed range.

    Use these functions to either mark a connected component with the specified color in-place, or build a mask and then extract the contour, or copy the region to another image, and so on.

    Note

    Since the mask is larger than the filled image, a pixel

    (x, y)
    in image corresponds to the pixel
    (x+1, y+1)
    in the mask .

    Declaration

    Objective-C

    + (int)floodFill:(nonnull Mat *)image
                mask:(nonnull Mat *)mask
           seedPoint:(nonnull Point2i *)seedPoint
              newVal:(nonnull Scalar *)newVal
                rect:(nonnull Rect2i *)rect
              loDiff:(nonnull Scalar *)loDiff
              upDiff:(nonnull Scalar *)upDiff;

    Swift

    class func floodFill(image: Mat, mask: Mat, seedPoint: Point2i, newVal: Scalar, rect: Rect2i, loDiff: Scalar, upDiff: Scalar) -> Int32

    Parameters

    image

    Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

    mask

    Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to the value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.

    seedPoint

    Starting point.

    newVal

    New value of the repainted domain pixels.

    loDiff

    Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

    upDiff

    Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

    rect

    Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.

  • Fills a connected component with the given color.

    The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at

    (x,y)
    is considered to belong to the repainted domain if:

    • in case of a grayscale image and floating range

      \texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}

    • in case of a grayscale image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}

    • in case of a color image and floating range

      \texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
      \texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g
      and
      \texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b

    • in case of a color image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g
      and
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b

    where

    src(x',y')
    is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of the pixel should be close enough to:

    • Color/brightness of one of its neighbors that already belong to the connected component in case of a floating range.
    • Color/brightness of the seed point in case of a fixed range.

    Use these functions to either mark a connected component with the specified color in-place, or build a mask and then extract the contour, or copy the region to another image, and so on.

    Note

    Since the mask is larger than the filled image, a pixel

    (x, y)
    in image corresponds to the pixel
    (x+1, y+1)
    in the mask .

    Declaration

    Objective-C

    + (int)floodFill:(nonnull Mat *)image
                mask:(nonnull Mat *)mask
           seedPoint:(nonnull Point2i *)seedPoint
              newVal:(nonnull Scalar *)newVal
                rect:(nonnull Rect2i *)rect
              loDiff:(nonnull Scalar *)loDiff;

    Swift

    class func floodFill(image: Mat, mask: Mat, seedPoint: Point2i, newVal: Scalar, rect: Rect2i, loDiff: Scalar) -> Int32

    Parameters

    image

    Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

    mask

    Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to the value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.

    seedPoint

    Starting point.

    newVal

    New value of the repainted domain pixels.

    loDiff

    Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

    rect

    Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.

  • Fills a connected component with the given color.

    The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at

    (x,y)
    is considered to belong to the repainted domain if:

    • in case of a grayscale image and floating range

      \texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}

    • in case of a grayscale image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}

    • in case of a color image and floating range

      \texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
      \texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g
      and
      \texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b

    • in case of a color image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g
      and
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b

    where

    src(x',y')
    is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of the pixel should be close enough to:

    • Color/brightness of one of its neighbors that already belong to the connected component in case of a floating range.
    • Color/brightness of the seed point in case of a fixed range.

    Use these functions to either mark a connected component with the specified color in-place, or build a mask and then extract the contour, or copy the region to another image, and so on.

    Note

    Since the mask is larger than the filled image, a pixel

    (x, y)
    in image corresponds to the pixel
    (x+1, y+1)
    in the mask .

    Declaration

    Objective-C

    + (int)floodFill:(nonnull Mat *)image
                mask:(nonnull Mat *)mask
           seedPoint:(nonnull Point2i *)seedPoint
              newVal:(nonnull Scalar *)newVal
                rect:(nonnull Rect2i *)rect;

    Swift

    class func floodFill(image: Mat, mask: Mat, seedPoint: Point2i, newVal: Scalar, rect: Rect2i) -> Int32

    Parameters

    image

    Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

    mask

    Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to the value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.

    seedPoint

    Starting point.

    newVal

    New value of the repainted domain pixels.

    rect

    Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.

  • Fills a connected component with the given color.

    The function cv::floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at

    (x,y)
    is considered to belong to the repainted domain if:

    • in case of a grayscale image and floating range

      \texttt{src}(x',y') - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(x',y') + \texttt{upDiff}

    • in case of a grayscale image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) - \texttt{loDiff} \leq \texttt{src}(x,y) \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y) + \texttt{upDiff}

    • in case of a color image and floating range

      \texttt{src}(x',y')_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(x',y')_r + \texttt{upDiff}_r,
      \texttt{src}(x',y')_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(x',y')_g + \texttt{upDiff}_g
      and
      \texttt{src}(x',y')_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(x',y')_b + \texttt{upDiff}_b

    • in case of a color image and fixed range

      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r - \texttt{loDiff}_r \leq \texttt{src}(x,y)_r \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_r + \texttt{upDiff}_r,
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g - \texttt{loDiff}_g \leq \texttt{src}(x,y)_g \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_g + \texttt{upDiff}_g
      and
      \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b - \texttt{loDiff}_b \leq \texttt{src}(x,y)_b \leq \texttt{src}(\texttt{seedPoint}.x, \texttt{seedPoint}.y)_b + \texttt{upDiff}_b

    where

    src(x',y')
    is the value of one of the pixel neighbors that is already known to belong to the component. That is, to be added to the connected component, the color/brightness of the pixel should be close enough to:

    • Color/brightness of one of its neighbors that already belong to the connected component in case of a floating range.
    • Color/brightness of the seed point in case of a fixed range.

    Use these functions to either mark a connected component with the specified color in-place, or build a mask and then extract the contour, or copy the region to another image, and so on.

    Note

    Since the mask is larger than the filled image, a pixel

    (x, y)
    in image corresponds to the pixel
    (x+1, y+1)
    in the mask .

    Declaration

    Objective-C

    + (int)floodFill:(nonnull Mat *)image
                mask:(nonnull Mat *)mask
           seedPoint:(nonnull Point2i *)seedPoint
              newVal:(nonnull Scalar *)newVal;

    Swift

    class func floodFill(image: Mat, mask: Mat, seedPoint: Point2i, newVal: Scalar) -> Int32

    Parameters

    image

    Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the #FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

    mask

    Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to the value specified in flags as described below. Additionally, the function fills the border of the mask with ones to simplify internal processing. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.

    seedPoint

    Starting point.

    newVal

    New value of the repainted domain pixels.

  • Finds out if there is any intersection between two rotated rectangles.

    If there is then the vertices of the intersecting region are returned as well.

    Below are some examples of intersection configurations. The hatched pattern indicates the intersecting region and the red vertices are returned by the function.

    intersection examples

    Declaration

    Objective-C

    + (int)rotatedRectangleIntersection:(nonnull RotatedRect *)rect1
                                  rect2:(nonnull RotatedRect *)rect2
                     intersectingRegion:(nonnull Mat *)intersectingRegion;

    Swift

    class func rotatedRectangleIntersection(rect1: RotatedRect, rect2: RotatedRect, intersectingRegion: Mat) -> Int32

    Parameters

    rect1

    First rectangle

    rect2

    Second rectangle

    intersectingRegion

    The output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as std::vector<cv::Point2f> or cv::Mat as Mx1 of type CV_32FC2.

    Return Value

    One of #RectanglesIntersectTypes
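
    For orientation, a minimal Swift sketch; it assumes the opencv2 module, and the RotatedRect(center:size:angle:) and Size2f(width:height:) initializer spellings are assumptions beyond the declaration above.

    import opencv2

    // Same rectangle twice, the second rotated by 45 degrees.
    let r1 = RotatedRect(center: Point2f(x: 50, y: 50),
                         size: Size2f(width: 40, height: 20), angle: 0)
    let r2 = RotatedRect(center: Point2f(x: 50, y: 50),
                         size: Size2f(width: 40, height: 20), angle: 45)

    let region = Mat()  // receives up to 8 CV_32FC2 vertices
    let kind = Imgproc.rotatedRectangleIntersection(rect1: r1, rect2: r2,
                                                    intersectingRegion: region)
    // kind maps onto #RectanglesIntersectTypes (none / partial / full).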

  • Finds edges in an image using the Canny algorithm with custom image gradient.

    Declaration

    Objective-C

    + (void)Canny:(nonnull Mat *)dx
                dy:(nonnull Mat *)dy
             edges:(nonnull Mat *)edges
        threshold1:(double)threshold1
        threshold2:(double)threshold2
        L2gradient:(BOOL)L2gradient;

    Swift

    class func Canny(dx: Mat, dy: Mat, edges: Mat, threshold1: Double, threshold2: Double, L2gradient: Bool)

    Parameters

    dx

    16-bit x derivative of input image (CV_16SC1 or CV_16SC3).

    dy

    16-bit y derivative of input image (same type as dx).

    edges

    output edge map; a single-channel 8-bit image that has the same size as image.

    threshold1

    first threshold for the hysteresis procedure.

    threshold2

    second threshold for the hysteresis procedure.

    L2gradient

    a flag indicating whether the more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude (L2gradient=true), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough (L2gradient=false).
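
    A minimal Swift sketch of this custom-gradient variant; it assumes the opencv2 module, and Imgproc.Sobel and Imgproc.rectangle are assumptions used to produce the 16-bit derivatives and the test data.

    import opencv2

    let gray = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(0))
    Imgproc.rectangle(img: gray, pt1: Point2i(x: 30, y: 30), pt2: Point2i(x: 70, y: 70),
                      color: Scalar(255), thickness: -1)  // -1 = filled

    // Compute the 16-bit derivatives yourself, then hand them to Canny.
    let dx = Mat(), dy = Mat(), edges = Mat()
    Imgproc.Sobel(src: gray, dst: dx, ddepth: CvType.CV_16S, dx: 1, dy: 0)
    Imgproc.Sobel(src: gray, dst: dy, ddepth: CvType.CV_16S, dx: 0, dy: 1)
    Imgproc.Canny(dx: dx, dy: dy, edges: edges,
                  threshold1: 50, threshold2: 150, L2gradient: true)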

  • Finds edges in an image using the Canny algorithm with custom image gradient.

    Declaration

    Objective-C

    + (void)Canny:(nonnull Mat *)dx
                dy:(nonnull Mat *)dy
             edges:(nonnull Mat *)edges
        threshold1:(double)threshold1
        threshold2:(double)threshold2;

    Swift

    class func Canny(dx: Mat, dy: Mat, edges: Mat, threshold1: Double, threshold2: Double)

    Parameters

    dx

    16-bit x derivative of input image (CV_16SC1 or CV_16SC3).

    dy

    16-bit y derivative of input image (same type as dx).

    edges

    output edge map; a single-channel 8-bit image that has the same size as image.

    threshold1

    first threshold for the hysteresis procedure.

    threshold2

    second threshold for the hysteresis procedure.

  • Finds edges in an image using the Canny algorithm CITE: Canny86 .

    The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector

    Declaration

    Objective-C

    + (void)Canny:(nonnull Mat *)image
               edges:(nonnull Mat *)edges
          threshold1:(double)threshold1
          threshold2:(double)threshold2
        apertureSize:(int)apertureSize
          L2gradient:(BOOL)L2gradient;

    Swift

    class func Canny(image: Mat, edges: Mat, threshold1: Double, threshold2: Double, apertureSize: Int32, L2gradient: Bool)

    Parameters

    image

    8-bit input image.

    edges

    output edge map; a single-channel 8-bit image that has the same size as image.

    threshold1

    first threshold for the hysteresis procedure.

    threshold2

    second threshold for the hysteresis procedure.

    apertureSize

    aperture size for the Sobel operator.

    L2gradient

    a flag indicating whether the more accurate L_2 norm =\sqrt{(dI/dx)^2 + (dI/dy)^2} should be used to calculate the image gradient magnitude (L2gradient=true), or whether the default L_1 norm =|dI/dx|+|dI/dy| is enough (L2gradient=false).
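
    A minimal Swift sketch of the plain variant; it assumes the opencv2 module, and Imgproc.circle is an assumption used only to draw test data.

    import opencv2

    let gray = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(0))
    Imgproc.circle(img: gray, center: Point2i(x: 50, y: 50), radius: 25,
                   color: Scalar(255), thickness: -1)  // -1 = filled

    let edges = Mat()
    // A common rule of thumb keeps threshold2 roughly 2-3x threshold1.
    Imgproc.Canny(image: gray, edges: edges, threshold1: 50, threshold2: 150,
                  apertureSize: 3, L2gradient: false)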

  • Finds edges in an image using the Canny algorithm CITE: Canny86 .

    The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector

    Declaration

    Objective-C

    + (void)Canny:(nonnull Mat *)image
               edges:(nonnull Mat *)edges
          threshold1:(double)threshold1
          threshold2:(double)threshold2
        apertureSize:(int)apertureSize;

    Swift

    class func Canny(image: Mat, edges: Mat, threshold1: Double, threshold2: Double, apertureSize: Int32)

    Parameters

    image

    8-bit input image.

    edges

    output edge map; a single-channel 8-bit image that has the same size as image.

    threshold1

    first threshold for the hysteresis procedure.

    threshold2

    second threshold for the hysteresis procedure.

    apertureSize

    aperture size for the Sobel operator.

  • Finds edges in an image using the Canny algorithm CITE: Canny86 .

    The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector

    Declaration

    Objective-C

    + (void)Canny:(nonnull Mat *)image
             edges:(nonnull Mat *)edges
        threshold1:(double)threshold1
        threshold2:(double)threshold2;

    Swift

    class func Canny(image: Mat, edges: Mat, threshold1: Double, threshold2: Double)

    Parameters

    image

    8-bit input image.

    edges

    output edge map; a single-channel 8-bit image that has the same size as image.

    threshold1

    first threshold for the hysteresis procedure.

    threshold2

    second threshold for the hysteresis procedure.

  • Declaration

    Objective-C

    + (void)GaussianBlur:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                   ksize:(nonnull Size2i *)ksize
                  sigmaX:(double)sigmaX
                  sigmaY:(double)sigmaY
              borderType:(BorderTypes)borderType;

    Swift

    class func GaussianBlur(src: Mat, dst: Mat, ksize: Size2i, sigmaX: Double, sigmaY: Double, borderType: BorderTypes)

    Parameters

    src

    input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    ksize

    Gaussian kernel size. ksize.width and ksize.height can differ, but they both must be positive and odd. Or they can be zeros, in which case they are computed from sigma.

    sigmaX

    Gaussian kernel standard deviation in X direction.

    sigmaY

    Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX; if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see #getGaussianKernel for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.

    borderType

    pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
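
    A minimal Swift sketch pinning down all three of ksize, sigmaX, and sigmaY, as the parameter note above recommends; it assumes the opencv2 module and the BORDER_DEFAULT case spelling.

    import opencv2

    let src = Mat(rows: 100, cols: 100, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))
    let dst = Mat()

    // 7x7 kernel, sigma 1.5 in both directions, default border handling.
    Imgproc.GaussianBlur(src: src, dst: dst, ksize: Size2i(width: 7, height: 7),
                         sigmaX: 1.5, sigmaY: 1.5, borderType: .BORDER_DEFAULT)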

  • Declaration

    Objective-C

    + (void)GaussianBlur:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                   ksize:(nonnull Size2i *)ksize
                  sigmaX:(double)sigmaX
                  sigmaY:(double)sigmaY;

    Swift

    class func GaussianBlur(src: Mat, dst: Mat, ksize: Size2i, sigmaX: Double, sigmaY: Double)

    Parameters

    src

    input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    ksize

    Gaussian kernel size. ksize.width and ksize.height can differ, but they both must be positive and odd. Or they can be zeros, in which case they are computed from sigma.

    sigmaX

    Gaussian kernel standard deviation in X direction.

    sigmaY

    Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX; if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see #getGaussianKernel for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.

  • Declaration

    Objective-C

    + (void)GaussianBlur:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                   ksize:(nonnull Size2i *)ksize
                  sigmaX:(double)sigmaX;

    Swift

    class func GaussianBlur(src: Mat, dst: Mat, ksize: Size2i, sigmaX: Double)

    Parameters

    src

    input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    ksize

    Gaussian kernel size. ksize.width and ksize.height can differ, but they both must be positive and odd. Or they can be zeros, in which case they are computed from sigma.

    sigmaX

    Gaussian kernel standard deviation in X direction.

  • Finds circles in a grayscale image using the Hough transform.

    The function finds circles in a grayscale image using a modification of the Hough transform.

    Example: : INCLUDE: snippets/imgproc_HoughLinesCircles.cpp

    Note

    Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range (minRadius and maxRadius) if you know it. Or, in the case of the #HOUGH_GRADIENT method, you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure.

    It also helps to smooth the image a bit unless it is already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.

    Declaration

    Objective-C

    + (void)HoughCircles:(nonnull Mat *)image
                 circles:(nonnull Mat *)circles
                  method:(HoughModes)method
                      dp:(double)dp
                 minDist:(double)minDist
                  param1:(double)param1
                  param2:(double)param2
               minRadius:(int)minRadius
               maxRadius:(int)maxRadius;

    Swift

    class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double, param1: Double, param2: Double, minRadius: Int32, maxRadius: Int32)

    Parameters

    image

    8-bit, single-channel, grayscale input image.

    circles

    Output vector of found circles. Each vector is encoded as 3 or 4 element floating-point vector

    (x, y, radius)
    or
    (x, y, radius, votes)
    .

    method

    Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.

    dp

    Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.

    minDist

    Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

    param1

    First method-specific parameter. In case of #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that #HOUGH_GRADIENT_ALT uses the #Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.

    param2

    Second method-specific parameter. In case of #HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first. In the case of the #HOUGH_GRADIENT_ALT algorithm, this is the circle “perfectness” measure. The closer it is to 1, the better-shaped circles the algorithm selects. In most cases 0.9 should be fine. If you want to get better detection of small circles, you may decrease it to 0.85, 0.8, or even less. But then also try to limit the search range [minRadius, maxRadius] to avoid many false circles.

    minRadius

    Minimum circle radius.

    maxRadius

    Maximum circle radius. If <= 0, uses the maximum image dimension. If < 0, #HOUGH_GRADIENT returns centers without finding the radius. #HOUGH_GRADIENT_ALT always computes circle radii.
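
    A minimal Swift sketch of a typical call, assuming an 8-bit single-channel Mat named img and the prefix-preserving enum case spelling HoughModes.HOUGH_GRADIENT as imported by the framework; the parameter values are illustrative starting points, not tuned defaults:

    let circles = Mat()
    // dp = 1: accumulator at full image resolution; minDist keeps detected
    // centers at least 1/8 of the image height apart (illustrative choice).
    Imgproc.HoughCircles(image: img, circles: circles,
                         method: HoughModes.HOUGH_GRADIENT,
                         dp: 1, minDist: Double(img.rows()) / 8,
                         param1: 100, param2: 30,
                         minRadius: 0, maxRadius: 0)
    // One column per detected circle (x, y, radius).
    print("found \(circles.cols()) circle(s)")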

  • Finds circles in a grayscale image using the Hough transform.

    The function finds circles in a grayscale image using a modification of the Hough transform.

    Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp

    Note

    Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the #HOUGH_GRADIENT method you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure.

    It also helps to smooth the image a bit unless it’s already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.

    Declaration

    Objective-C

    + (void)HoughCircles:(nonnull Mat *)image
                 circles:(nonnull Mat *)circles
                  method:(HoughModes)method
                      dp:(double)dp
                 minDist:(double)minDist
                  param1:(double)param1
                  param2:(double)param2
               minRadius:(int)minRadius;

    Swift

    class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double, param1: Double, param2: Double, minRadius: Int32)

    Parameters

    image

    8-bit, single-channel, grayscale input image.

    circles

    Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector (x, y, radius) or (x, y, radius, votes).

    method

    Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.

    dp

    Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.

    minDist

    Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

    param1

    First method-specific parameter. In case of #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that #HOUGH_GRADIENT_ALT uses the #Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.

    param2

    Second method-specific parameter. In case of #HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first. In the case of the #HOUGH_GRADIENT_ALT algorithm, this is the circle “perfectness” measure. The closer it is to 1, the better-shaped circles the algorithm selects. In most cases 0.9 should be fine. If you want to get better detection of small circles, you may decrease it to 0.85, 0.8 or even less. But then also try to limit the search range [minRadius, maxRadius] to avoid many false circles.

    minRadius

    Minimum circle radius.

  • Finds circles in a grayscale image using the Hough transform.

    The function finds circles in a grayscale image using a modification of the Hough transform.

    Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp

    Note

    Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the #HOUGH_GRADIENT method you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure.

    It also helps to smooth the image a bit unless it’s already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.

    Declaration

    Objective-C

    + (void)HoughCircles:(nonnull Mat *)image
                 circles:(nonnull Mat *)circles
                  method:(HoughModes)method
                      dp:(double)dp
                 minDist:(double)minDist
                  param1:(double)param1
                  param2:(double)param2;

    Swift

    class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double, param1: Double, param2: Double)

    Parameters

    image

    8-bit, single-channel, grayscale input image.

    circles

    Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector (x, y, radius) or (x, y, radius, votes).

    method

    Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.

    dp

    Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.

    minDist

    Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

    param1

    First method-specific parameter. In case of #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that #HOUGH_GRADIENT_ALT uses the #Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.

    param2

    Second method-specific parameter. In case of #HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first. In the case of the #HOUGH_GRADIENT_ALT algorithm, this is the circle “perfectness” measure. The closer it is to 1, the better-shaped circles the algorithm selects. In most cases 0.9 should be fine. If you want to get better detection of small circles, you may decrease it to 0.85, 0.8 or even less. But then also try to limit the search range [minRadius, maxRadius] to avoid many false circles.

  • Finds circles in a grayscale image using the Hough transform.

    The function finds circles in a grayscale image using a modification of the Hough transform.

    Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp

    Note

    Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the #HOUGH_GRADIENT method you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure.

    It also helps to smooth the image a bit unless it’s already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.

    Declaration

    Objective-C

    + (void)HoughCircles:(nonnull Mat *)image
                 circles:(nonnull Mat *)circles
                  method:(HoughModes)method
                      dp:(double)dp
                 minDist:(double)minDist
                  param1:(double)param1;

    Swift

    class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double, param1: Double)

    Parameters

    image

    8-bit, single-channel, grayscale input image.

    circles

    Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector (x, y, radius) or (x, y, radius, votes).

    method

    Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.

    dp

    Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.

    minDist

    Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

    param1

    First method-specific parameter. In case of #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that #HOUGH_GRADIENT_ALT uses the #Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.

  • Finds circles in a grayscale image using the Hough transform.

    The function finds circles in a grayscale image using a modification of the Hough transform.

    Example: INCLUDE: snippets/imgproc_HoughLinesCircles.cpp

    Note

    Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the #HOUGH_GRADIENT method you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure.

    It also helps to smooth the image a bit unless it’s already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.

    Declaration

    Objective-C

    + (void)HoughCircles:(nonnull Mat *)image
                 circles:(nonnull Mat *)circles
                  method:(HoughModes)method
                      dp:(double)dp
                 minDist:(double)minDist;

    Swift

    class func HoughCircles(image: Mat, circles: Mat, method: HoughModes, dp: Double, minDist: Double)

    Parameters

    image

    8-bit, single-channel, grayscale input image.

    circles

    Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector (x, y, radius) or (x, y, radius, votes).

    method

    Detection method, see #HoughModes. The available methods are #HOUGH_GRADIENT and #HOUGH_GRADIENT_ALT.

    dp

    Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For #HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.

    minDist

    Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

  • Finds lines in a binary image using the standard Hough transform.

    The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.

    Declaration

    Objective-C

    + (void)HoughLines:(nonnull Mat *)image
                 lines:(nonnull Mat *)lines
                   rho:(double)rho
                 theta:(double)theta
             threshold:(int)threshold
                   srn:(double)srn
                   stn:(double)stn
             min_theta:(double)min_theta
             max_theta:(double)max_theta;

    Swift

    class func HoughLines(image: Mat, lines: Mat, rho: Double, theta: Double, threshold: Int32, srn: Double, stn: Double, min_theta: Double, max_theta: Double)

    Parameters

    image

    8-bit, single-channel binary source image. The image may be modified by the function.

    lines

    Output vector of lines. Each line is represented by a 2- or 3-element vector (\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.

    rho

    Distance resolution of the accumulator in pixels.

    theta

    Angle resolution of the accumulator in radians.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

    srn

    For the multi-scale Hough transform, it is a divisor for the distance resolution rho . The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn . If both srn=0 and stn=0 , the classical Hough transform is used. Otherwise, both these parameters should be positive.

    stn

    For the multi-scale Hough transform, it is a divisor for the distance resolution theta.

    min_theta

    For standard and multi-scale Hough transform, minimum angle to check for lines. Must fall between 0 and max_theta.

    max_theta

    For standard and multi-scale Hough transform, maximum angle to check for lines. Must fall between min_theta and CV_PI.
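
    A minimal Swift sketch, assuming edges is a binary edge map (for example, the output of a Canny detector); the threshold is an illustrative value, not a tuned default:

    let lines = Mat()
    // 1 px distance resolution, 1 degree angular resolution; a line needs
    // more than 150 accumulator votes to be returned.
    Imgproc.HoughLines(image: edges, lines: lines,
                       rho: 1, theta: .pi / 180, threshold: 150)
    // Each entry holds one (rho, theta) pair describing a detected line.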

  • Finds lines in a binary image using the standard Hough transform.

    The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.

    Declaration

    Objective-C

    + (void)HoughLines:(nonnull Mat *)image
                 lines:(nonnull Mat *)lines
                   rho:(double)rho
                 theta:(double)theta
             threshold:(int)threshold
                   srn:(double)srn
                   stn:(double)stn
             min_theta:(double)min_theta;

    Swift

    class func HoughLines(image: Mat, lines: Mat, rho: Double, theta: Double, threshold: Int32, srn: Double, stn: Double, min_theta: Double)

    Parameters

    image

    8-bit, single-channel binary source image. The image may be modified by the function.

    lines

    Output vector of lines. Each line is represented by a 2- or 3-element vector (\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.

    rho

    Distance resolution of the accumulator in pixels.

    theta

    Angle resolution of the accumulator in radians.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

    srn

    For the multi-scale Hough transform, it is a divisor for the distance resolution rho . The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn . If both srn=0 and stn=0 , the classical Hough transform is used. Otherwise, both these parameters should be positive.

    stn

    For the multi-scale Hough transform, it is a divisor for the distance resolution theta.

    min_theta

    For standard and multi-scale Hough transform, minimum angle to check for lines. Must fall between 0 and max_theta.

  • Finds lines in a binary image using the standard Hough transform.

    The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.

    Declaration

    Objective-C

    + (void)HoughLines:(nonnull Mat *)image
                 lines:(nonnull Mat *)lines
                   rho:(double)rho
                 theta:(double)theta
             threshold:(int)threshold
                   srn:(double)srn
                   stn:(double)stn;

    Swift

    class func HoughLines(image: Mat, lines: Mat, rho: Double, theta: Double, threshold: Int32, srn: Double, stn: Double)

    Parameters

    image

    8-bit, single-channel binary source image. The image may be modified by the function.

    lines

    Output vector of lines. Each line is represented by a 2- or 3-element vector (\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.

    rho

    Distance resolution of the accumulator in pixels.

    theta

    Angle resolution of the accumulator in radians.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

    srn

    For the multi-scale Hough transform, it is a divisor for the distance resolution rho . The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn . If both srn=0 and stn=0 , the classical Hough transform is used. Otherwise, both these parameters should be positive.

    stn

    For the multi-scale Hough transform, it is a divisor for the distance resolution theta.

  • Finds lines in a binary image using the standard Hough transform.

    The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.

    Declaration

    Objective-C

    + (void)HoughLines:(nonnull Mat *)image
                 lines:(nonnull Mat *)lines
                   rho:(double)rho
                 theta:(double)theta
             threshold:(int)threshold
                   srn:(double)srn;

    Swift

    class func HoughLines(image: Mat, lines: Mat, rho: Double, theta: Double, threshold: Int32, srn: Double)

    Parameters

    image

    8-bit, single-channel binary source image. The image may be modified by the function.

    lines

    Output vector of lines. Each line is represented by a 2- or 3-element vector (\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.

    rho

    Distance resolution of the accumulator in pixels.

    theta

    Angle resolution of the accumulator in radians.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

    srn

    For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.

  • Finds lines in a binary image using the standard Hough transform.

    The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.

    Declaration

    Objective-C

    + (void)HoughLines:(nonnull Mat *)image
                 lines:(nonnull Mat *)lines
                   rho:(double)rho
                 theta:(double)theta
             threshold:(int)threshold;

    Swift

    class func HoughLines(image: Mat, lines: Mat, rho: Double, theta: Double, threshold: Int32)

    Parameters

    image

    8-bit, single-channel binary source image. The image may be modified by the function.

    lines

    Output vector of lines. Each line is represented by a 2- or 3-element vector (\rho, \theta) or (\rho, \theta, \textrm{votes}). \rho is the distance from the coordinate origin (0,0) (top-left corner of the image). \theta is the line rotation angle in radians (0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}). \textrm{votes} is the value of the accumulator.

    rho

    Distance resolution of the accumulator in pixels.

    theta

    Angle resolution of the accumulator in radians.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

  • Finds line segments in a binary image using the probabilistic Hough transform.

    The function implements the probabilistic Hough transform algorithm for line detection, described in CITE: Matas00

    See the line detection example below: INCLUDE: snippets/imgproc_HoughLinesP.cpp

    This is a sample picture the function parameters have been tuned for:

    [image]

    And this is the output of the above program in case of the probabilistic Hough transform:

    [image]

    Declaration

    Objective-C

    + (void)HoughLinesP:(nonnull Mat *)image
                  lines:(nonnull Mat *)lines
                    rho:(double)rho
                  theta:(double)theta
              threshold:(int)threshold
          minLineLength:(double)minLineLength
             maxLineGap:(double)maxLineGap;

    Swift

    class func HoughLinesP(image: Mat, lines: Mat, rho: Double, theta: Double, threshold: Int32, minLineLength: Double, maxLineGap: Double)

    Parameters

    image

    8-bit, single-channel binary source image. The image may be modified by the function.

    lines

    Output vector of lines. Each line is represented by a 4-element vector (x_1, y_1, x_2, y_2), where (x_1,y_1) and (x_2, y_2) are the ending points of each detected line segment.

    rho

    Distance resolution of the accumulator in pixels.

    theta

    Angle resolution of the accumulator in radians.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

    minLineLength

    Minimum line length. Line segments shorter than that are rejected.

    maxLineGap

    Maximum allowed gap between points on the same line to link them.
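
    A minimal Swift sketch, assuming edges is a binary edge map; the thresholds and lengths are illustrative, not tuned values:

    let segments = Mat()
    // Reject segments shorter than 50 px; bridge gaps of up to 10 px
    // between collinear points when linking them into one segment.
    Imgproc.HoughLinesP(image: edges, lines: segments,
                        rho: 1, theta: .pi / 180, threshold: 80,
                        minLineLength: 50, maxLineGap: 10)
    // Each entry is one segment (x1, y1, x2, y2).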

  • Finds line segments in a binary image using the probabilistic Hough transform.

    The function implements the probabilistic Hough transform algorithm for line detection, described in CITE: Matas00

    See the line detection example below: INCLUDE: snippets/imgproc_HoughLinesP.cpp

    This is a sample picture the function parameters have been tuned for:

    [image]

    And this is the output of the above program in case of the probabilistic Hough transform:

    [image]

    Declaration

    Objective-C

    + (void)HoughLinesP:(nonnull Mat *)image
                  lines:(nonnull Mat *)lines
                    rho:(double)rho
                  theta:(double)theta
              threshold:(int)threshold
          minLineLength:(double)minLineLength;

    Swift

    class func HoughLinesP(image: Mat, lines: Mat, rho: Double, theta: Double, threshold: Int32, minLineLength: Double)

    Parameters

    image

    8-bit, single-channel binary source image. The image may be modified by the function.

    lines

    Output vector of lines. Each line is represented by a 4-element vector (x_1, y_1, x_2, y_2), where (x_1,y_1) and (x_2, y_2) are the ending points of each detected line segment.

    rho

    Distance resolution of the accumulator in pixels.

    theta

    Angle resolution of the accumulator in radians.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

    minLineLength

    Minimum line length. Line segments shorter than that are rejected.

  • Finds line segments in a binary image using the probabilistic Hough transform.

    The function implements the probabilistic Hough transform algorithm for line detection, described in CITE: Matas00

    See the line detection example below: INCLUDE: snippets/imgproc_HoughLinesP.cpp

    This is a sample picture the function parameters have been tuned for:

    [image]

    And this is the output of the above program in case of the probabilistic Hough transform:

    [image]

    Declaration

    Objective-C

    + (void)HoughLinesP:(nonnull Mat *)image
                  lines:(nonnull Mat *)lines
                    rho:(double)rho
                  theta:(double)theta
              threshold:(int)threshold;

    Swift

    class func HoughLinesP(image: Mat, lines: Mat, rho: Double, theta: Double, threshold: Int32)

    Parameters

    image

    8-bit, single-channel binary source image. The image may be modified by the function.

    lines

    Output vector of lines. Each line is represented by a 4-element vector (x_1, y_1, x_2, y_2), where (x_1,y_1) and (x_2, y_2) are the ending points of each detected line segment.

    rho

    Distance resolution of the accumulator in pixels.

    theta

    Angle resolution of the accumulator in radians.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

  • Finds lines in a set of points using the standard Hough transform.

    The function finds lines in a set of points using a modification of the Hough transform. INCLUDE: snippets/imgproc_HoughLinesPointSet.cpp

    Declaration

    Objective-C

    + (void)HoughLinesPointSet:(nonnull Mat *)_point
                        _lines:(nonnull Mat *)_lines
                     lines_max:(int)lines_max
                     threshold:(int)threshold
                       min_rho:(double)min_rho
                       max_rho:(double)max_rho
                      rho_step:(double)rho_step
                     min_theta:(double)min_theta
                     max_theta:(double)max_theta
                    theta_step:(double)theta_step;

    Swift

    class func HoughLinesPointSet(_point: Mat, _lines: Mat, lines_max: Int32, threshold: Int32, min_rho: Double, max_rho: Double, rho_step: Double, min_theta: Double, max_theta: Double, theta_step: Double)

    Parameters

    _point

    Input vector of points. Each vector must be encoded as a Point vector (x,y). Type must be CV_32FC2 or CV_32SC2.

    _lines

    Output vector of found lines. Each vector is encoded as a vector (votes, rho, theta). The larger the value of ‘votes’, the higher the reliability of the Hough line.

    lines_max

    Max count of Hough lines.

    threshold

    Accumulator threshold parameter. Only those lines are returned that get enough votes (>\texttt{threshold}).

    min_rho

    Minimum Distance value of the accumulator in pixels.

    max_rho

    Maximum Distance value of the accumulator in pixels.

    rho_step

    Distance resolution of the accumulator in pixels.

    min_theta

    Minimum angle value of the accumulator in radians.

    max_theta

    Maximum angle value of the accumulator in radians.

    theta_step

    Angle resolution of the accumulator in radians.
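
    A minimal Swift sketch, assuming points is a CV_32FC2 Mat of scattered 2-D points; the search ranges are illustrative:

    let foundLines = Mat()
    // Scan rho over [0, 360] px in 1 px steps and theta over [0, pi/2]
    // in 1 degree steps; return at most 10 lines with > 20 votes each.
    Imgproc.HoughLinesPointSet(_point: points, _lines: foundLines,
                               lines_max: 10, threshold: 20,
                               min_rho: 0, max_rho: 360, rho_step: 1,
                               min_theta: 0, max_theta: .pi / 2,
                               theta_step: .pi / 180)
    // Each result entry is (votes, rho, theta).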

  • Declaration

    Objective-C

    + (void)HuMoments:(Moments*)m hu:(Mat*)hu NS_SWIFT_NAME(HuMoments(m:hu:));

    Swift

    class func HuMoments(m: Moments, hu: Mat)
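
    HuMoments computes the seven Hu moment invariants from a Moments struct. A short Swift sketch, assuming a contour Mat and that the framework also exposes an Imgproc.moments(array:) binding (not shown in this excerpt) to produce the input Moments:

    // moments(array:) is an assumed binding, shown here for illustration.
    let m = Imgproc.moments(array: contour)
    let hu = Mat()
    Imgproc.HuMoments(m: m, hu: hu)
    // hu now holds the seven rotation/scale/translation-invariant values.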
  • Calculates the Laplacian of an image.

    The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:

    \texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}

    This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:

    \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Laplacian:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
               ddepth:(int)ddepth
                ksize:(int)ksize
                scale:(double)scale
                delta:(double)delta
           borderType:(BorderTypes)borderType;

    Swift

    class func Laplacian(src: Mat, dst: Mat, ddepth: Int32, ksize: Int32, scale: Double, delta: Double, borderType: BorderTypes)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Desired depth of the destination image.

    ksize

    Aperture size used to compute the second-derivative filters. See #getDerivKernels for details. The size must be positive and odd.

    scale

    Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See #getDerivKernels for details.

    delta

    Optional delta value that is added to the results prior to storing them in dst .

    borderType

    Pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
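
    A minimal Swift sketch, assuming an 8-bit grayscale Mat named gray and the CvType / BorderTypes spellings as imported by the framework:

    let lap = Mat()
    // CV_16S output depth preserves the negative second-derivative
    // responses that an 8-bit destination would truncate.
    Imgproc.Laplacian(src: gray, dst: lap, ddepth: CvType.CV_16S,
                      ksize: 3, scale: 1, delta: 0,
                      borderType: BorderTypes.BORDER_DEFAULT)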

  • Calculates the Laplacian of an image.

    The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:

    \texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}

    This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:

    \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Laplacian:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
               ddepth:(int)ddepth
                ksize:(int)ksize
                scale:(double)scale
                delta:(double)delta;

    Swift

    class func Laplacian(src: Mat, dst: Mat, ddepth: Int32, ksize: Int32, scale: Double, delta: Double)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Desired depth of the destination image.

    ksize

    Aperture size used to compute the second-derivative filters. See #getDerivKernels for details. The size must be positive and odd.

    scale

    Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See #getDerivKernels for details.

    delta

    Optional delta value that is added to the results prior to storing them in dst .

  • Calculates the Laplacian of an image.

    The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:

    \texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}

    This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:

    \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Laplacian:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
               ddepth:(int)ddepth
                ksize:(int)ksize
                scale:(double)scale;

    Swift

    class func Laplacian(src: Mat, dst: Mat, ddepth: Int32, ksize: Int32, scale: Double)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Desired depth of the destination image.

    ksize

    Aperture size used to compute the second-derivative filters. See #getDerivKernels for details. The size must be positive and odd.

    scale

    Optional scale factor for the computed Laplacian values. By default, no scaling is applied. See #getDerivKernels for details.

  • Calculates the Laplacian of an image.

    The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:

    \texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}

    This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:

    \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Laplacian:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
               ddepth:(int)ddepth
                ksize:(int)ksize;

    Swift

    class func Laplacian(src: Mat, dst: Mat, ddepth: Int32, ksize: Int32)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Desired depth of the destination image.

    ksize

    Aperture size used to compute the second-derivative filters. See #getDerivKernels for details. The size must be positive and odd.

  • Calculates the Laplacian of an image.

    The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:

    \texttt{dst} = \Delta \texttt{src} = \frac{\partial^2 \texttt{src}}{\partial x^2} + \frac{\partial^2 \texttt{src}}{\partial y^2}

    This is done when ksize > 1. When ksize == 1, the Laplacian is computed by filtering the image with the following 3 \times 3 aperture:

    \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Laplacian:(nonnull Mat *)src dst:(nonnull Mat *)dst ddepth:(int)ddepth;

    Swift

    class func Laplacian(src: Mat, dst: Mat, ddepth: Int32)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Desired depth of the destination image.

  • Calculates the first x- or y- image derivative using the Scharr operator.

    The function computes the first x- or y- spatial image derivative using the Scharr operator. The call \texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)} is equivalent to \texttt{Sobel(src, dst, ddepth, dx, dy, FILTER\_SCHARR, scale, delta, borderType)}.

    See also: cartToPolar

    Declaration

    Objective-C

    + (void)Scharr:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            ddepth:(int)ddepth
                dx:(int)dx
                dy:(int)dy
             scale:(double)scale
             delta:(double)delta
        borderType:(BorderTypes)borderType;

    Swift

    class func Scharr(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, scale: Double, delta: Double, borderType: BorderTypes)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src.

    ddepth

    output image depth, see REF: filter_depths “combinations”

    dx

    order of the derivative x.

    dy

    order of the derivative y.

    scale

    optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).

    delta

    optional delta value that is added to the results prior to storing them in dst.

    borderType

    pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
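
    A minimal Swift sketch of the first x-derivative, assuming an 8-bit grayscale Mat named gray and the CvType spelling as imported by the framework:

    let gradX = Mat()
    // dx: 1, dy: 0 selects the x-derivative; CV_16S keeps the signed
    // response that an 8-bit destination would truncate.
    Imgproc.Scharr(src: gray, dst: gradX, ddepth: CvType.CV_16S, dx: 1, dy: 0)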

  • Calculates the first x- or y- image derivative using the Scharr operator.

    The function computes the first x- or y- spatial image derivative using the Scharr operator. The call \texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)} is equivalent to \texttt{Sobel(src, dst, ddepth, dx, dy, FILTER\_SCHARR, scale, delta, borderType)}.

    See also: cartToPolar

    Declaration

    Objective-C

    + (void)Scharr:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            ddepth:(int)ddepth
                dx:(int)dx
                dy:(int)dy
             scale:(double)scale
             delta:(double)delta;

    Swift

    class func Scharr(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, scale: Double, delta: Double)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src.

    ddepth

    output image depth, see REF: filter_depths “combinations”

    dx

    order of the derivative x.

    dy

    order of the derivative y.

    scale

    optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).

    delta

    optional delta value that is added to the results prior to storing them in dst.

  • Calculates the first x- or y- image derivative using the Scharr operator.

    The function computes the first x- or y- spatial image derivative using the Scharr operator. The call \texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)} is equivalent to \texttt{Sobel(src, dst, ddepth, dx, dy, FILTER\_SCHARR, scale, delta, borderType)}.

    See also: cartToPolar

    Declaration

    Objective-C

    + (void)Scharr:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            ddepth:(int)ddepth
                dx:(int)dx
                dy:(int)dy
             scale:(double)scale;

    Swift

    class func Scharr(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, scale: Double)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src.

    ddepth

    output image depth, see REF: filter_depths “combinations”

    dx

    order of the derivative x.

    dy

    order of the derivative y.

    scale

    optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).

  • Calculates the first x- or y- image derivative using the Scharr operator.

    The function computes the first x- or y- spatial image derivative using the Scharr operator. The call \texttt{Scharr(src, dst, ddepth, dx, dy, scale, delta, borderType)} is equivalent to \texttt{Sobel(src, dst, ddepth, dx, dy, FILTER\_SCHARR, scale, delta, borderType)}.

    See also: cartToPolar

    Declaration

    Objective-C

    + (void)Scharr:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            ddepth:(int)ddepth
                dx:(int)dx
                dy:(int)dy;

    Swift

    class func Scharr(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src.

    ddepth

    output image depth, see REF: filter_depths “combinations”

    dx

    order of the derivative x.

    dy

    order of the derivative y.

  • Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.

    In all cases except one, the \texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.

    There is also the special value ksize = #FILTER_SCHARR (-1) that corresponds to the 3 \times 3 Scharr filter that may give more accurate results than the 3 \times 3 Sobel. The Scharr aperture is

    \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10 \\ -3 & 0 & 3 \end{bmatrix}

    for the x-derivative, or transposed for the y-derivative.

    The function calculates an image derivative by convolving the image with the appropriate kernel:

    \texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}}

    The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to noise. Most often, the function is called with (xorder = 1, yorder = 0, ksize = 3) or (xorder = 0, yorder = 1, ksize = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:

    \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}

    The second case corresponds to a kernel of:

    \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Sobel:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            ddepth:(int)ddepth
                dx:(int)dx
                dy:(int)dy
             ksize:(int)ksize
             scale:(double)scale
             delta:(double)delta
        borderType:(BorderTypes)borderType;

    Swift

    class func Sobel(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, ksize: Int32, scale: Double, delta: Double, borderType: BorderTypes)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src .

    ddepth

    output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.

    dx

    order of the derivative x.

    dy

    order of the derivative y.

    ksize

    size of the extended Sobel kernel; it must be 1, 3, 5, or 7.

    scale

    optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).

    delta

    optional delta value that is added to the results prior to storing them in dst.

    borderType

    pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
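
    A minimal Swift sketch of the common gradient-magnitude pattern, assuming an 8-bit grayscale Mat named gray and that the framework's Core module exposes the usual convertScaleAbs and addWeighted bindings:

    let gradX = Mat(), gradY = Mat()
    Imgproc.Sobel(src: gray, dst: gradX, ddepth: CvType.CV_16S, dx: 1, dy: 0, ksize: 3)
    Imgproc.Sobel(src: gray, dst: gradY, ddepth: CvType.CV_16S, dx: 0, dy: 1, ksize: 3)
    // Fold the two signed derivatives into a rough 8-bit magnitude image.
    let absX = Mat(), absY = Mat(), grad = Mat()
    Core.convertScaleAbs(src: gradX, dst: absX)
    Core.convertScaleAbs(src: gradY, dst: absY)
    Core.addWeighted(src1: absX, alpha: 0.5, src2: absY, beta: 0.5, gamma: 0, dst: grad)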

  • Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.

    In all cases except one, the \texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.

    There is also the special value ksize = #FILTER_SCHARR (-1) that corresponds to the 3 \times 3 Scharr filter that may give more accurate results than the 3 \times 3 Sobel. The Scharr aperture is

    \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10 \\ -3 & 0 & 3 \end{bmatrix}

    for the x-derivative, or transposed for the y-derivative.

    The function calculates an image derivative by convolving the image with the appropriate kernel:

    \texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}}

    The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to noise. Most often, the function is called with (xorder = 1, yorder = 0, ksize = 3) or (xorder = 0, yorder = 1, ksize = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:

    \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}

    The second case corresponds to a kernel of:

    \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Sobel:(nonnull Mat *)src
              dst:(nonnull Mat *)dst
           ddepth:(int)ddepth
               dx:(int)dx
               dy:(int)dy
            ksize:(int)ksize
            scale:(double)scale
            delta:(double)delta;

    Swift

    class func Sobel(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, ksize: Int32, scale: Double, delta: Double)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src .

    ddepth

    output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.

    dx

    order of the derivative x.

    dy

    order of the derivative y.

    ksize

    size of the extended Sobel kernel; it must be 1, 3, 5, or 7.

    scale

    optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).

    delta

    optional delta value that is added to the results prior to storing them in dst.

  • Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.

    In all cases except one, the \texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.

    There is also the special value ksize = #FILTER_SCHARR (-1) that corresponds to the 3 \times 3 Scharr filter that may give more accurate results than the 3 \times 3 Sobel. The Scharr aperture is

    \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10 \\ -3 & 0 & 3 \end{bmatrix}

    for the x-derivative, or transposed for the y-derivative.

    The function calculates an image derivative by convolving the image with the appropriate kernel:

    \texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}}

    The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to noise. Most often, the function is called with (xorder = 1, yorder = 0, ksize = 3) or (xorder = 0, yorder = 1, ksize = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:

    \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}

    The second case corresponds to a kernel of:

    \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Sobel:(nonnull Mat *)src
              dst:(nonnull Mat *)dst
           ddepth:(int)ddepth
               dx:(int)dx
               dy:(int)dy
            ksize:(int)ksize
            scale:(double)scale;

    Swift

    class func Sobel(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, ksize: Int32, scale: Double)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src .

    ddepth

    output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.

    dx

    order of the derivative x.

    dy

    order of the derivative y.

    ksize

    size of the extended Sobel kernel; it must be 1, 3, 5, or 7.

    scale

    optional scale factor for the computed derivative values; by default, no scaling is applied (see #getDerivKernels for details).

  • Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.

    In all cases except one, the \texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.

    There is also the special value ksize = #FILTER_SCHARR (-1) that corresponds to the 3 \times 3 Scharr filter that may give more accurate results than the 3 \times 3 Sobel. The Scharr aperture is

    \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10 \\ -3 & 0 & 3 \end{bmatrix}

    for the x-derivative, or transposed for the y-derivative.

    The function calculates an image derivative by convolving the image with the appropriate kernel:

    \texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}}

    The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to noise. Most often, the function is called with (xorder = 1, yorder = 0, ksize = 3) or (xorder = 0, yorder = 1, ksize = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:

    \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}

    The second case corresponds to a kernel of:

    \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Sobel:(nonnull Mat *)src
              dst:(nonnull Mat *)dst
           ddepth:(int)ddepth
               dx:(int)dx
               dy:(int)dy
            ksize:(int)ksize;

    Swift

    class func Sobel(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32, ksize: Int32)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src .

    ddepth

    output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.

    dx

    order of the derivative x.

    dy

    order of the derivative y.

    ksize

    size of the extended Sobel kernel; it must be 1, 3, 5, or 7.

  • Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.

    In all cases except one, the \texttt{ksize} \times \texttt{ksize} separable kernel is used to calculate the derivative. When \texttt{ksize = 1}, the 3 \times 1 or 1 \times 3 kernel is used (that is, no Gaussian smoothing is done). ksize = 1 can only be used for the first or the second x- or y- derivatives.

    There is also the special value ksize = #FILTER_SCHARR (-1) that corresponds to the 3 \times 3 Scharr filter that may give more accurate results than the 3 \times 3 Sobel. The Scharr aperture is

    \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10 \\ -3 & 0 & 3 \end{bmatrix}

    for the x-derivative, or transposed for the y-derivative.

    The function calculates an image derivative by convolving the image with the appropriate kernel:

    \texttt{dst} = \frac{\partial^{xorder+yorder} \texttt{src}}{\partial x^{xorder} \partial y^{yorder}}

    The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to noise. Most often, the function is called with (xorder = 1, yorder = 0, ksize = 3) or (xorder = 0, yorder = 1, ksize = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:

    \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}

    The second case corresponds to a kernel of:

    \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}

    Declaration

    Objective-C

    + (void)Sobel:(nonnull Mat *)src
              dst:(nonnull Mat *)dst
           ddepth:(int)ddepth
               dx:(int)dx
               dy:(int)dy;

    Swift

    class func Sobel(src: Mat, dst: Mat, ddepth: Int32, dx: Int32, dy: Int32)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src .

    ddepth

    output image depth, see REF: filter_depths “combinations”; in the case of 8-bit input images it will result in truncated derivatives.

    dx

    order of the derivative x.

    dy

    order of the derivative y.

  • Adds an image to the accumulator image.

    The function adds src or some of its elements to dst :

    \texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

    The function supports multi-channel images. Each channel is processed independently.

    The function cv::accumulate can be used, for example, to collect statistics of a scene background viewed by a still camera and for the further foreground-background segmentation.

    Declaration

    Objective-C

    + (void)accumulate:(nonnull Mat *)src
                   dst:(nonnull Mat *)dst
                  mask:(nonnull Mat *)mask;

    Swift

    class func accumulate(src: Mat, dst: Mat, mask: Mat)

    Parameters

    src

    Input image of type CV_8UC(n), CV_16UC(n), CV_32FC(n) or CV_64FC(n), where n is a positive integer.

    dst

    Accumulator image with the same number of channels as input image, and a depth of CV_32F or CV_64F.

    mask

    Optional operation mask.
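
    A hedged Swift sketch (assuming the opencv2 module and a Mat.zeros(_:cols:type:) factory in the bindings; the frame Mat is a placeholder):

     import opencv2

     let frame = Mat(rows: 240, cols: 320, type: CvType.CV_8UC3)  // placeholder camera frame
     let acc = Mat.zeros(240, cols: 320, type: CvType.CV_32FC3)   // floating-point accumulator
     Imgproc.accumulate(src: frame, dst: acc)                     // acc(x,y) += frame(x,y)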

  • Adds an image to the accumulator image.

    The function adds src or some of its elements to dst :

    \texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

    The function supports multi-channel images. Each channel is processed independently.

    The function cv::accumulate can be used, for example, to collect statistics of a scene background viewed by a still camera and for the further foreground-background segmentation.

    Declaration

    Objective-C

    + (void)accumulate:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func accumulate(src: Mat, dst: Mat)

    Parameters

    src

    Input image of type CV_8UC(n), CV_16UC(n), CV_32FC(n) or CV_64FC(n), where n is a positive integer.

    dst

    Accumulator image with the same number of channels as input image, and a depth of CV_32F or CV_64F.

  • Adds the per-element product of two input images to the accumulator image.

    The function adds the product of two images or their selected regions to the accumulator dst :

    \texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src1} (x,y) \cdot \texttt{src2} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

    The function supports multi-channel images. Each channel is processed independently.

    Declaration

    Objective-C

    + (void)accumulateProduct:(nonnull Mat *)src1
                         src2:(nonnull Mat *)src2
                          dst:(nonnull Mat *)dst
                         mask:(nonnull Mat *)mask;

    Swift

    class func accumulateProduct(src1: Mat, src2: Mat, dst: Mat, mask: Mat)
  • Adds the per-element product of two input images to the accumulator image.

    The function adds the product of two images or their selected regions to the accumulator dst :

    \texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src1} (x,y) \cdot \texttt{src2} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

    The function supports multi-channel images. Each channel is processed independently.

    Declaration

    Objective-C

    + (void)accumulateProduct:(nonnull Mat *)src1
                         src2:(nonnull Mat *)src2
                          dst:(nonnull Mat *)dst;

    Swift

    class func accumulateProduct(src1: Mat, src2: Mat, dst: Mat)
  • Adds the square of a source image to the accumulator image.

    The function adds the input image src or its selected region, raised to a power of 2, to the accumulator dst :

    \texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y)^2 \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

    The function supports multi-channel images. Each channel is processed independently.

    Declaration

    Objective-C

    + (void)accumulateSquare:(nonnull Mat *)src
                         dst:(nonnull Mat *)dst
                        mask:(nonnull Mat *)mask;

    Swift

    class func accumulateSquare(src: Mat, dst: Mat, mask: Mat)

    Parameters

    src

    Input image as 1- or 3-channel, 8-bit or 32-bit floating point.

    dst

    Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.

    mask

    Optional operation mask.

  • Adds the square of a source image to the accumulator image.

    The function adds the input image src or its selected region, raised to a power of 2, to the accumulator dst :

    \texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y)^2 \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

    The function supports multi-channel images. Each channel is processed independently.

    Declaration

    Objective-C

    + (void)accumulateSquare:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func accumulateSquare(src: Mat, dst: Mat)

    Parameters

    src

    Input image as 1- or 3-channel, 8-bit or 32-bit floating point.

    dst

    Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.

  • Updates a running average.

    The function calculates the weighted sum of the input image src and the accumulator dst so that dst becomes a running average of a frame sequence:

    \texttt{dst} (x,y) \leftarrow (1- \texttt{alpha} ) \cdot \texttt{dst} (x,y) + \texttt{alpha} \cdot \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

    That is, alpha regulates the update speed (how fast the accumulator “forgets” about earlier images). The function supports multi-channel images. Each channel is processed independently.

    Declaration

    Objective-C

    + (void)accumulateWeighted:(nonnull Mat *)src
                           dst:(nonnull Mat *)dst
                         alpha:(double)alpha
                          mask:(nonnull Mat *)mask;

    Swift

    class func accumulateWeighted(src: Mat, dst: Mat, alpha: Double, mask: Mat)
  • Updates a running average.

    The function calculates the weighted sum of the input image src and the accumulator dst so that dst becomes a running average of a frame sequence:

    \texttt{dst} (x,y) \leftarrow (1- \texttt{alpha} ) \cdot \texttt{dst} (x,y) + \texttt{alpha} \cdot \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

    That is, alpha regulates the update speed (how fast the accumulator “forgets” about earlier images). The function supports multi-channel images. Each channel is processed independently.

    Declaration

    Objective-C

    + (void)accumulateWeighted:(nonnull Mat *)src
                           dst:(nonnull Mat *)dst
                         alpha:(double)alpha;

    Swift

    class func accumulateWeighted(src: Mat, dst: Mat, alpha: Double)
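
    For background modelling, a sketch along these lines (assuming the opencv2 module; updateBackground and the 0.05 rate are illustrative):

     import opencv2

     let background = Mat.zeros(240, cols: 320, type: CvType.CV_32FC3)  // running average
     func updateBackground(with frame: Mat) {
         // alpha = 0.05: each new frame contributes 5%, so the model "forgets" slowly
         Imgproc.accumulateWeighted(src: frame, dst: background, alpha: 0.05)
     }
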
  • Applies an adaptive threshold to an array.

    The function transforms a grayscale image to a binary image according to the formulae:

    • THRESH_BINARY
      dst(x,y) = \begin{cases} \texttt{maxValue} & \text{if } src(x,y) > T(x,y) \\ 0 & \text{otherwise} \end{cases}
    • THRESH_BINARY_INV
      dst(x,y) = \begin{cases} 0 & \text{if } src(x,y) > T(x,y) \\ \texttt{maxValue} & \text{otherwise} \end{cases}
      where T(x,y) is a threshold calculated individually for each pixel (see the adaptiveMethod parameter).

    The function can process the image in-place.

    Declaration

    Objective-C

    + (void)adaptiveThreshold:(nonnull Mat *)src
                          dst:(nonnull Mat *)dst
                     maxValue:(double)maxValue
               adaptiveMethod:(AdaptiveThresholdTypes)adaptiveMethod
                thresholdType:(ThresholdTypes)thresholdType
                    blockSize:(int)blockSize
                            C:(double)C;

    Swift

    class func adaptiveThreshold(src: Mat, dst: Mat, maxValue: Double, adaptiveMethod: AdaptiveThresholdTypes, thresholdType: ThresholdTypes, blockSize: Int32, C: Double)

    Parameters

    src

    Source 8-bit single-channel image.

    dst

    Destination image of the same size and the same type as src.

    maxValue

    Non-zero value assigned to the pixels for which the condition is satisfied.

    adaptiveMethod

    Adaptive thresholding algorithm to use, see #AdaptiveThresholdTypes. Boundaries are processed using #BORDER_REPLICATE | #BORDER_ISOLATED.

    thresholdType

    Thresholding type that must be either #THRESH_BINARY or #THRESH_BINARY_INV, see #ThresholdTypes.

    blockSize

    Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.

    C

    Constant subtracted from the mean or weighted mean (see the details below). Normally, it is positive but may be zero or negative as well.
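
    A minimal Swift sketch (assuming the opencv2 module and that the Swift enum case spellings follow the C constant names; the input Mat is a placeholder):

     import opencv2

     let gray = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1)  // placeholder grayscale scan
     let binary = Mat()
     // T(x,y) is the mean of each 11x11 neighborhood minus C = 7
     Imgproc.adaptiveThreshold(src: gray, dst: binary, maxValue: 255,
                               adaptiveMethod: AdaptiveThresholdTypes.ADAPTIVE_THRESH_MEAN_C,
                               thresholdType: ThresholdTypes.THRESH_BINARY,
                               blockSize: 11, C: 7)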

  • Applies a GNU Octave/MATLAB equivalent colormap on a given image.

    Declaration

    Objective-C

    + (void)applyColorMap:(nonnull Mat *)src
                      dst:(nonnull Mat *)dst
                 colormap:(ColormapTypes)colormap;

    Swift

    class func applyColorMap(src: Mat, dst: Mat, colormap: ColormapTypes)

    Parameters

    src

    The source image, grayscale or colored of type CV_8UC1 or CV_8UC3.

    dst

    The result is the colormapped source image. Note: Mat::create is called on dst.

    colormap

    The colormap to apply, see #ColormapTypes
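
    For example (a sketch assuming the opencv2 module; the input Mat is a placeholder):

     import opencv2

     let gray = Mat(rows: 256, cols: 256, type: CvType.CV_8UC1)  // placeholder intensity image
     let colored = Mat()
     Imgproc.applyColorMap(src: gray, dst: colored, colormap: ColormapTypes.COLORMAP_JET)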

  • Applies a user colormap on a given image.

    Declaration

    Objective-C

    + (void)applyColorMap:(nonnull Mat *)src
                      dst:(nonnull Mat *)dst
                userColor:(nonnull Mat *)userColor;

    Swift

    class func applyColorMap(src: Mat, dst: Mat, userColor: Mat)

    Parameters

    src

    The source image, grayscale or colored of type CV_8UC1 or CV_8UC3.

    dst

    The result is the colormapped source image. Note: Mat::create is called on dst.

    userColor

    The colormap to apply, of type CV_8UC1 or CV_8UC3 and size 256.

  • Approximates a polygonal curve(s) with the specified precision.

    The function cv::approxPolyDP approximates a curve or a polygon with another curve/polygon with fewer vertices so that the distance between them is less than or equal to the specified precision. It uses the Douglas-Peucker algorithm (http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm).

    Declaration

    Objective-C

    + (void)approxPolyDP:(nonnull NSArray<Point2f *> *)curve
             approxCurve:(nonnull NSMutableArray<Point2f *> *)approxCurve
                 epsilon:(double)epsilon
                  closed:(BOOL)closed;

    Swift

    class func approxPolyDP(curve: [Point2f], approxCurve: NSMutableArray, epsilon: Double, closed: Bool)

    Parameters

    curve

    Input vector of 2D points, stored in std::vector or Mat.

    approxCurve

    Result of the approximation. The type should match the type of the input curve.

    epsilon

    Parameter specifying the approximation accuracy. This is the maximum distance between the original curve and its approximation.

    closed

    If true, the approximated curve is closed (its first and last vertices are connected). Otherwise, it is not closed.
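
    A rough Swift sketch (assuming the opencv2 module; the contour and the epsilon value are illustrative):

     import opencv2
     import Foundation

     let contour: [Point2f] = [Point2f(x: 0, y: 0), Point2f(x: 50, y: 1),
                               Point2f(x: 100, y: 0), Point2f(x: 100, y: 100),
                               Point2f(x: 0, y: 100)]
     let approx = NSMutableArray()
     // with epsilon = 3 px the nearly collinear point (50, 1) is dropped
     Imgproc.approxPolyDP(curve: contour, approxCurve: approx, epsilon: 3.0, closed: true)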

  • Draws an arrow segment pointing from the first point to the second one.

    The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.

    Declaration

    Objective-C

    + (void)arrowedLine:(nonnull Mat *)img
                    pt1:(nonnull Point2i *)pt1
                    pt2:(nonnull Point2i *)pt2
                  color:(nonnull Scalar *)color
              thickness:(int)thickness
              line_type:(LineTypes)line_type
                  shift:(int)shift
              tipLength:(double)tipLength;

    Swift

    class func arrowedLine(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32, line_type: LineTypes, shift: Int32, tipLength: Double)

    Parameters

    img

    Image.

    pt1

    The point the arrow starts from.

    pt2

    The point the arrow points to.

    color

    Line color.

    thickness

    Line thickness.

    line_type

    Type of the line. See #LineTypes

    shift

    Number of fractional bits in the point coordinates.

    tipLength

    The length of the arrow tip in relation to the arrow length.
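
    For example (a sketch assuming the opencv2 module, a Mat.zeros factory, a Scalar(_:_:_:) initializer, and BGR channel order):

     import opencv2

     let canvas = Mat.zeros(200, cols: 200, type: CvType.CV_8UC3)
     Imgproc.arrowedLine(img: canvas, pt1: Point2i(x: 20, y: 20),
                         pt2: Point2i(x: 180, y: 120),
                         color: Scalar(0, 0, 255), thickness: 2)  // red arrow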

  • Draws an arrow segment pointing from the first point to the second one.

    The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.

    Declaration

    Objective-C

    + (void)arrowedLine:(nonnull Mat *)img
                    pt1:(nonnull Point2i *)pt1
                    pt2:(nonnull Point2i *)pt2
                  color:(nonnull Scalar *)color
              thickness:(int)thickness
              line_type:(LineTypes)line_type
                  shift:(int)shift;

    Swift

    class func arrowedLine(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32, line_type: LineTypes, shift: Int32)

    Parameters

    img

    Image.

    pt1

    The point the arrow starts from.

    pt2

    The point the arrow points to.

    color

    Line color.

    thickness

    Line thickness.

    line_type

    Type of the line. See #LineTypes

    shift

    Number of fractional bits in the point coordinates.

  • Draws an arrow segment pointing from the first point to the second one.

    The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.

    Declaration

    Objective-C

    + (void)arrowedLine:(nonnull Mat *)img
                    pt1:(nonnull Point2i *)pt1
                    pt2:(nonnull Point2i *)pt2
                  color:(nonnull Scalar *)color
              thickness:(int)thickness
              line_type:(LineTypes)line_type;

    Swift

    class func arrowedLine(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32, line_type: LineTypes)

    Parameters

    img

    Image.

    pt1

    The point the arrow starts from.

    pt2

    The point the arrow points to.

    color

    Line color.

    thickness

    Line thickness.

    line_type

    Type of the line. See #LineTypes

  • Draws an arrow segment pointing from the first point to the second one.

    The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.

    Declaration

    Objective-C

    + (void)arrowedLine:(nonnull Mat *)img
                    pt1:(nonnull Point2i *)pt1
                    pt2:(nonnull Point2i *)pt2
                  color:(nonnull Scalar *)color
              thickness:(int)thickness;

    Swift

    class func arrowedLine(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32)

    Parameters

    img

    Image.

    pt1

    The point the arrow starts from.

    pt2

    The point the arrow points to.

    color

    Line color.

    thickness

    Line thickness.

  • Draws an arrow segment pointing from the first point to the second one.

    The function cv::arrowedLine draws an arrow between pt1 and pt2 points in the image. See also #line.

    Declaration

    Objective-C

    + (void)arrowedLine:(nonnull Mat *)img
                    pt1:(nonnull Point2i *)pt1
                    pt2:(nonnull Point2i *)pt2
                  color:(nonnull Scalar *)color;

    Swift

    class func arrowedLine(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar)

    Parameters

    img

    Image.

    pt1

    The point the arrow starts from.

    pt2

    The point the arrow points to.

    color

    Line color.

  • Applies the bilateral filter to an image.

    The function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html . bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.

    Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look “cartoonish”.

    Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.

    This filter does not work in-place.

    Declaration

    Objective-C

    + (void)bilateralFilter:(nonnull Mat *)src
                        dst:(nonnull Mat *)dst
                          d:(int)d
                 sigmaColor:(double)sigmaColor
                 sigmaSpace:(double)sigmaSpace
                 borderType:(BorderTypes)borderType;

    Swift

    class func bilateralFilter(src: Mat, dst: Mat, d: Int32, sigmaColor: Double, sigmaSpace: Double, borderType: BorderTypes)

    Parameters

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image.

    dst

    Destination image of the same size and type as src .

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace.

    sigmaColor

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.

    sigmaSpace

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

    borderType

    border mode used to extrapolate pixels outside of the image, see #BorderTypes
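
    A minimal Swift sketch (assuming the opencv2 module; the input Mat is a placeholder):

     import opencv2

     let src = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3)  // placeholder color input
     let dst = Mat()  // must be a different Mat: the filter does not work in-place
     // d = 9 with moderate sigmas: noticeable smoothing while edges stay sharp
     Imgproc.bilateralFilter(src: src, dst: dst, d: 9, sigmaColor: 75, sigmaSpace: 75)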

  • Applies the bilateral filter to an image.

    The function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html . bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.

    Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look “cartoonish”.

    Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.

    This filter does not work in-place.

    Declaration

    Objective-C

    + (void)bilateralFilter:(nonnull Mat *)src
                        dst:(nonnull Mat *)dst
                          d:(int)d
                 sigmaColor:(double)sigmaColor
                 sigmaSpace:(double)sigmaSpace;

    Swift

    class func bilateralFilter(src: Mat, dst: Mat, d: Int32, sigmaColor: Double, sigmaSpace: Double)

    Parameters

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image.

    dst

    Destination image of the same size and type as src .

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace.

    sigmaColor

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.

    sigmaSpace

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

  • Blurs an image using the normalized box filter.

    The function smooths an image using the kernel:

    \texttt{K} = \frac{1}{\texttt{ksize.width*ksize.height}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix}

    The call blur(src, dst, ksize, anchor, borderType) is equivalent to boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).

    Declaration

    Objective-C

    + (void)blur:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
             ksize:(nonnull Size2i *)ksize
            anchor:(nonnull Point2i *)anchor
        borderType:(BorderTypes)borderType;

    Swift

    class func blur(src: Mat, dst: Mat, ksize: Size2i, anchor: Point2i, borderType: BorderTypes)

    Parameters

    src

    input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    ksize

    blurring kernel size.

    anchor

    anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

    borderType

    border mode used to extrapolate pixels outside of the image, see #BorderTypes. #BORDER_WRAP is not supported.
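
    For example (a sketch assuming the opencv2 module; the input Mat is a placeholder):

     import opencv2

     let src = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3)  // placeholder input
     let dst = Mat()
     // 5x5 normalized box (mean) filter
     Imgproc.blur(src: src, dst: dst, ksize: Size2i(width: 5, height: 5))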

  • Blurs an image using the normalized box filter.

    The function smooths an image using the kernel:

    \texttt{K} = \frac{1}{\texttt{ksize.width*ksize.height}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix}

    The call blur(src, dst, ksize, anchor, borderType) is equivalent to boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).

    Declaration

    Objective-C

    + (void)blur:(nonnull Mat *)src
             dst:(nonnull Mat *)dst
           ksize:(nonnull Size2i *)ksize
          anchor:(nonnull Point2i *)anchor;

    Swift

    class func blur(src: Mat, dst: Mat, ksize: Size2i, anchor: Point2i)

    Parameters

    src

    input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    ksize

    blurring kernel size.

    anchor

    anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

  • Blurs an image using the normalized box filter.

    The function smooths an image using the kernel:

    \texttt{K} = \frac{1}{\texttt{ksize.width*ksize.height}} \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix}

    The call blur(src, dst, ksize, anchor, borderType) is equivalent to boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).

    Declaration

    Objective-C

    + (void)blur:(nonnull Mat *)src
             dst:(nonnull Mat *)dst
           ksize:(nonnull Size2i *)ksize;

    Swift

    class func blur(src: Mat, dst: Mat, ksize: Size2i)

    Parameters

    src

    input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    ksize

    blurring kernel size.

  • Blurs an image using the box filter.

    The function smooths an image using the kernel:

    \texttt{K} = \alpha \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix}

    where

    \alpha = \begin{cases} \frac{1}{\texttt{ksize.width*ksize.height}} & \text{when } \texttt{normalize=true} \\ 1 & \text{otherwise} \end{cases}

    Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use #integral.

    Declaration

    Objective-C

    + (void)boxFilter:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
               ddepth:(int)ddepth
                ksize:(nonnull Size2i *)ksize
               anchor:(nonnull Point2i *)anchor
            normalize:(BOOL)normalize
           borderType:(BorderTypes)borderType;

    Swift

    class func boxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i, anchor: Point2i, normalize: Bool, borderType: BorderTypes)

    Parameters

    src

    input image.

    dst

    output image of the same size and type as src.

    ddepth

    the output image depth (-1 to use src.depth()).

    ksize

    blurring kernel size.

    anchor

    anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

    normalize

    flag, specifying whether the kernel is normalized by its area or not.

    borderType

    border mode used to extrapolate pixels outside of the image, see #BorderTypes. #BORDER_WRAP is not supported.
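
    To use it as a plain local sum, something like this sketch (assuming the opencv2 module; the input Mat is a placeholder):

     import opencv2

     let src = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1)
     let sums = Mat()
     // normalize=false yields raw 7x7 window sums; CV_32F output avoids overflow
     Imgproc.boxFilter(src: src, dst: sums, ddepth: CvType.CV_32F,
                       ksize: Size2i(width: 7, height: 7),
                       anchor: Point2i(x: -1, y: -1), normalize: false)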

  • Blurs an image using the box filter.

    The function smooths an image using the kernel:

    \texttt{K} = \alpha \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix}

    where

    \alpha = \begin{cases} \frac{1}{\texttt{ksize.width*ksize.height}} & \text{when } \texttt{normalize=true} \\ 1 & \text{otherwise} \end{cases}

    Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use #integral.

    Declaration

    Objective-C

    + (void)boxFilter:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
               ddepth:(int)ddepth
                ksize:(nonnull Size2i *)ksize
               anchor:(nonnull Point2i *)anchor
            normalize:(BOOL)normalize;

    Swift

    class func boxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i, anchor: Point2i, normalize: Bool)

    Parameters

    src

    input image.

    dst

    output image of the same size and type as src.

    ddepth

    the output image depth (-1 to use src.depth()).

    ksize

    blurring kernel size.

    anchor

    anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

    normalize

    flag, specifying whether the kernel is normalized by its area or not.

  • Blurs an image using the box filter.

    The function smooths an image using the kernel:

    \texttt{K} = \alpha \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix}

    where

    \alpha = \begin{cases} \frac{1}{\texttt{ksize.width*ksize.height}} & \text{when } \texttt{normalize=true} \\ 1 & \text{otherwise} \end{cases}

    Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use #integral.

    Declaration

    Objective-C

    + (void)boxFilter:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
               ddepth:(int)ddepth
                ksize:(nonnull Size2i *)ksize
               anchor:(nonnull Point2i *)anchor;

    Swift

    class func boxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i, anchor: Point2i)

    Parameters

    src

    input image.

    dst

    output image of the same size and type as src.

    ddepth

    the output image depth (-1 to use src.depth()).

    ksize

    blurring kernel size.

    anchor

    anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

  • Blurs an image using the box filter.

    The function smooths an image using the kernel:

    \texttt{K} = \alpha \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 1 & 1 & \cdots & 1 & 1 \\ \hdotsfor{6} \\ 1 & 1 & 1 & \cdots & 1 & 1 \end{bmatrix}

    where

    \alpha = \begin{cases} \frac{1}{\texttt{ksize.width*ksize.height}} & \text{when } \texttt{normalize=true} \\ 1 & \text{otherwise} \end{cases}

    Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use #integral.

    Declaration

    Objective-C

    + (void)boxFilter:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
               ddepth:(int)ddepth
                ksize:(nonnull Size2i *)ksize;

    Swift

    class func boxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i)

    Parameters

    src

    input image.

    dst

    output image of the same size and type as src.

    ddepth

    the output image depth (-1 to use src.depth()).

    ksize

    blurring kernel size.

  • Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.

    The function finds the four vertices of a rotated rectangle, which is useful for drawing it. In C++, instead of using this function, you can directly use the RotatedRect::points method. Please visit the REF: tutorial_bounding_rotated_ellipses “tutorial on Creating Bounding rotated boxes and ellipses for contours” for more information.

    Declaration

    Objective-C

    + (void)boxPoints:(nonnull RotatedRect *)box points:(nonnull Mat *)points;

    Swift

    class func boxPoints(box: RotatedRect, points: Mat)

    Parameters

    box

    The input rotated rectangle. It may be the output of #minAreaRect.

    points

    The output array of four vertices of rectangles.
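
    A short Swift sketch (assuming the opencv2 module and a RotatedRect(center:size:angle:) initializer with a Size2f size):

     import opencv2

     let box = RotatedRect(center: Point2f(x: 50, y: 50),
                           size: Size2f(width: 40, height: 20), angle: 30)
     let vertices = Mat()
     Imgproc.boxPoints(box: box, points: vertices)  // 4x2 CV_32F matrix, one corner per row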

  • Declaration

    Objective-C

    + (void)calcBackProject:(NSArray<Mat*>*)images channels:(IntVector*)channels hist:(Mat*)hist dst:(Mat*)dst ranges:(FloatVector*)ranges scale:(double)scale NS_SWIFT_NAME(calcBackProject(images:channels:hist:dst:ranges:scale:));

    Swift

    class func calcBackProject(images: [Mat], channels: IntVector, hist: Mat, dst: Mat, ranges: FloatVector, scale: Double)
  • Declaration

    Objective-C

    + (void)calcHist:(NSArray<Mat*>*)images channels:(IntVector*)channels mask:(Mat*)mask hist:(Mat*)hist histSize:(IntVector*)histSize ranges:(FloatVector*)ranges accumulate:(BOOL)accumulate NS_SWIFT_NAME(calcHist(images:channels:mask:hist:histSize:ranges:accumulate:));

    Swift

    class func calcHist(images: [Mat], channels: IntVector, mask: Mat, hist: Mat, histSize: IntVector, ranges: FloatVector, accumulate: Bool)
  • Declaration

    Objective-C

    + (void)calcHist:(NSArray<Mat*>*)images channels:(IntVector*)channels mask:(Mat*)mask hist:(Mat*)hist histSize:(IntVector*)histSize ranges:(FloatVector*)ranges NS_SWIFT_NAME(calcHist(images:channels:mask:hist:histSize:ranges:));

    Swift

    class func calcHist(images: [Mat], channels: IntVector, mask: Mat, hist: Mat, histSize: IntVector, ranges: FloatVector)
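
    As a rough illustration of the histogram call (assuming the opencv2 module and the IntVector/FloatVector array initializers; the input Mat is a placeholder and an empty mask Mat means "no mask"):

     import opencv2

     let gray = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1)  // placeholder input
     let hist = Mat()
     // 256-bin histogram of channel 0 over the full 8-bit range [0, 256)
     Imgproc.calcHist(images: [gray], channels: IntVector([0]), mask: Mat(),
                      hist: hist, histSize: IntVector([256]),
                      ranges: FloatVector([0, 256]))
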
  • Draws a circle.

    The function cv::circle draws a simple or filled circle with a given center and radius.

    Declaration

    Objective-C

    + (void)circle:(nonnull Mat *)img
            center:(nonnull Point2i *)center
            radius:(int)radius
             color:(nonnull Scalar *)color
         thickness:(int)thickness
          lineType:(LineTypes)lineType
             shift:(int)shift;

    Swift

    class func circle(img: Mat, center: Point2i, radius: Int32, color: Scalar, thickness: Int32, lineType: LineTypes, shift: Int32)

    Parameters

    img

    Image where the circle is drawn.

    center

    Center of the circle.

    radius

    Radius of the circle.

    color

    Circle color.

    thickness

    Thickness of the circle outline, if positive. Negative values, like #FILLED, mean that a filled circle is to be drawn.

    lineType

    Type of the circle boundary. See #LineTypes

    shift

    Number of fractional bits in the coordinates of the center and in the radius value.
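
    For example (a sketch assuming the opencv2 module, a Mat.zeros factory, and a Scalar(_:_:_:) initializer):

     import opencv2

     let canvas = Mat.zeros(200, cols: 200, type: CvType.CV_8UC3)
     // negative thickness (e.g. -1, i.e. #FILLED) draws a filled green disk
     Imgproc.circle(img: canvas, center: Point2i(x: 100, y: 100), radius: 40,
                    color: Scalar(0, 255, 0), thickness: -1)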

  • Draws a circle.

    The function cv::circle draws a simple or filled circle with a given center and radius.

    Declaration

    Objective-C

    + (void)circle:(nonnull Mat *)img
            center:(nonnull Point2i *)center
            radius:(int)radius
             color:(nonnull Scalar *)color
         thickness:(int)thickness
          lineType:(LineTypes)lineType;

    Swift

    class func circle(img: Mat, center: Point2i, radius: Int32, color: Scalar, thickness: Int32, lineType: LineTypes)

    Parameters

    img

    Image where the circle is drawn.

    center

    Center of the circle.

    radius

    Radius of the circle.

    color

    Circle color.

    thickness

    Thickness of the circle outline, if positive. Negative values, like #FILLED, mean that a filled circle is to be drawn.

    lineType

    Type of the circle boundary. See #LineTypes

  • Draws a circle.

    The function cv::circle draws a simple or filled circle with a given center and radius.

    Declaration

    Objective-C

    + (void)circle:(nonnull Mat *)img
            center:(nonnull Point2i *)center
            radius:(int)radius
             color:(nonnull Scalar *)color
         thickness:(int)thickness;

    Swift

    class func circle(img: Mat, center: Point2i, radius: Int32, color: Scalar, thickness: Int32)

    Parameters

    img

    Image where the circle is drawn.

    center

    Center of the circle.

    radius

    Radius of the circle.

    color

    Circle color.

    thickness

    Thickness of the circle outline, if positive. Negative values, like #FILLED, mean that a filled circle is to be drawn.

  • Draws a circle.

    The function cv::circle draws a simple or filled circle with a given center and radius.

    Declaration

    Objective-C

    + (void)circle:(nonnull Mat *)img
            center:(nonnull Point2i *)center
            radius:(int)radius
             color:(nonnull Scalar *)color;

    Swift

    class func circle(img: Mat, center: Point2i, radius: Int32, color: Scalar)

    Parameters

    img

    Image where the circle is drawn.

    center

    Center of the circle.

    radius

    Radius of the circle.

    color

    Circle color.

  • Converts image transformation maps from one representation to another.

    The function converts a pair of maps for remap from one representation to another. The following options ( (map1.type(), map2.type()) \rightarrow (dstmap1.type(), dstmap2.type()) ) are supported:

    • \texttt{(CV\_32FC1, CV\_32FC1)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}. This is the most frequently used conversion operation, in which the original floating-point maps (see remap) are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when nninterpolation=false) contains indices in the interpolation tables.

    • \texttt{(CV\_32FC2)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}. The same as above but the original maps are stored in one 2-channel matrix.

    • Reverse conversion. Obviously, the reconstructed floating-point maps will not be exactly the same as the originals.

    See

    +remap:dst:map1:map2:interpolation:borderMode:borderValue:, undistort, initUndistortRectifyMap

    Declaration

    Objective-C

    + (void)convertMaps:(nonnull Mat *)map1
                   map2:(nonnull Mat *)map2
                dstmap1:(nonnull Mat *)dstmap1
                dstmap2:(nonnull Mat *)dstmap2
            dstmap1type:(int)dstmap1type
        nninterpolation:(BOOL)nninterpolation;

    Swift

    class func convertMaps(map1: Mat, map2: Mat, dstmap1: Mat, dstmap2: Mat, dstmap1type: Int32, nninterpolation: Bool)

    Parameters

    map1

    The first input map of type CV_16SC2, CV_32FC1, or CV_32FC2 .

    map2

    The second input map of type CV_16UC1, CV_32FC1, or none (empty matrix), respectively.

    dstmap1

    The first output map that has the type dstmap1type and the same size as src .

    dstmap2

    The second output map.

    dstmap1type

    Type of the first output map that should be CV_16SC2, CV_32FC1, or CV_32FC2 .

    nninterpolation

    Flag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation.
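
    A hedged Swift sketch of the most common conversion (assuming the opencv2 module; the float maps here are placeholders that would normally come from your own remap setup):

     import opencv2

     let mapX = Mat(rows: 480, cols: 640, type: CvType.CV_32FC1)  // placeholder float maps
     let mapY = Mat(rows: 480, cols: 640, type: CvType.CV_32FC1)
     let fixed1 = Mat()
     let fixed2 = Mat()
     // (CV_32FC1, CV_32FC1) -> (CV_16SC2, CV_16UC1): compact fixed-point maps for remap
     Imgproc.convertMaps(map1: mapX, map2: mapY, dstmap1: fixed1, dstmap2: fixed2,
                         dstmap1type: CvType.CV_16SC2, nninterpolation: false)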

  • Converts image transformation maps from one representation to another.

    The function converts a pair of maps for remap from one representation to another. The following options ( (map1.type(), map2.type()) \rightarrow (dstmap1.type(), dstmap2.type()) ) are supported:

    • \texttt{(CV\_32FC1, CV\_32FC1)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}. This is the most frequently used conversion operation, in which the original floating-point maps (see remap) are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when nninterpolation=false) contains indices in the interpolation tables.

    • \texttt{(CV\_32FC2)} \rightarrow \texttt{(CV\_16SC2, CV\_16UC1)}. The same as above but the original maps are stored in one 2-channel matrix.

    • Reverse conversion. Obviously, the reconstructed floating-point maps will not be exactly the same as the originals.

    See

    +remap:dst:map1:map2:interpolation:borderMode:borderValue:, undistort, initUndistortRectifyMap

    Declaration

    Objective-C

    + (void)convertMaps:(nonnull Mat *)map1
                   map2:(nonnull Mat *)map2
                dstmap1:(nonnull Mat *)dstmap1
                dstmap2:(nonnull Mat *)dstmap2
            dstmap1type:(int)dstmap1type;

    Swift

    class func convertMaps(map1: Mat, map2: Mat, dstmap1: Mat, dstmap2: Mat, dstmap1type: Int32)

    Parameters

    map1

    The first input map of type CV_16SC2, CV_32FC1, or CV_32FC2 .

    map2

    The second input map of type CV_16UC1, CV_32FC1, or none (empty matrix), respectively.

    dstmap1

    The first output map that has the type dstmap1type and the same size as src .

    dstmap2

    The second output map.

    dstmap1type

    Type of the first output map that should be CV_16SC2, CV_32FC1, or CV_32FC2.

  • Finds the convex hull of a point set.

    The function cv::convexHull finds the convex hull of a 2D point set using Sklansky’s algorithm CITE: Sklansky82 that has O(N logN) complexity in the current implementation.

    Note

    points and hull should be different arrays; in-place processing isn’t supported.

    Check REF: tutorial_hull “the corresponding tutorial” for more details.

    useful links:

    https://www.learnopencv.com/convex-hull-using-opencv-in-python-and-c/

    Declaration

    Objective-C

    + (void)convexHull:(nonnull NSArray<Point2i *> *)points
                  hull:(nonnull IntVector *)hull
             clockwise:(BOOL)clockwise;

    Swift

    class func convexHull(points: [Point2i], hull: IntVector, clockwise: Bool)

    Parameters

    points

    Input 2D point set, stored in std::vector or Mat.

    hull

    Output convex hull. It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.

    clockwise

    Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards.

    returnPoints

    Operation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. When the output array is std::vector, the flag is ignored, and the output depends on the type of the vector: std::vector<int> implies returnPoints=false, std::vector<Point> implies returnPoints=true.
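
    A short Swift sketch (assuming the opencv2 module and an IntVector array initializer; the point set is illustrative):

     import opencv2

     let points: [Point2i] = [Point2i(x: 0, y: 0), Point2i(x: 10, y: 2),
                              Point2i(x: 5, y: 5), Point2i(x: 9, y: 9),
                              Point2i(x: 0, y: 8)]
     let hullIndices = IntVector([Int32]())
     Imgproc.convexHull(points: points, hull: hullIndices)
     // hullIndices now holds 0-based indices of the hull vertices within `points`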

  • Finds the convex hull of a point set.

    The function cv::convexHull finds the convex hull of a 2D point set using Sklansky’s algorithm CITE: Sklansky82 that has O(N logN) complexity in the current implementation.

    Note

    points and hull should be different arrays; in-place processing isn’t supported.

    Check REF: tutorial_hull “the corresponding tutorial” for more details.

    useful links:

    https://www.learnopencv.com/convex-hull-using-opencv-in-python-and-c/

    Declaration

    Objective-C

    + (void)convexHull:(nonnull NSArray<Point2i *> *)points
                  hull:(nonnull IntVector *)hull;

    Swift

    class func convexHull(points: [Point2i], hull: IntVector)

    Parameters

    points

    Input 2D point set, stored in std::vector or Mat.

    hull

    Output convex hull. It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.

  • Finds the convexity defects of a contour.

    [Figure: convexity defects of a hand contour]

    Declaration

    Objective-C

    + (void)convexityDefects:(nonnull NSArray<Point2i *> *)contour
                  convexhull:(nonnull IntVector *)convexhull
            convexityDefects:(nonnull NSMutableArray<Int4 *> *)convexityDefects;

    Swift

    class func convexityDefects(contour: [Point2i], convexhull: IntVector, convexityDefects: NSMutableArray)

    Parameters

    contour

    Input contour.

    convexhull

    Convex hull obtained using convexHull that should contain indices of the contour points that make the hull.

    convexityDefects

    The output vector of convexity defects. In C++ and the new Python/Java interface each convexity defect is represented as a 4-element integer vector (a.k.a. #Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is a fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is fixpt_depth/256.0.
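
    A sketch of how the two calls combine (assuming the opencv2 module and an IntVector array initializer; the concave contour is illustrative):

     import opencv2
     import Foundation

     let contour: [Point2i] = [Point2i(x: 0, y: 0), Point2i(x: 10, y: 0),
                               Point2i(x: 10, y: 10), Point2i(x: 5, y: 4),
                               Point2i(x: 0, y: 10)]   // dent at (5, 4)
     let hull = IntVector([Int32]())
     Imgproc.convexHull(points: contour, hull: hull)   // hull as indices
     let defects = NSMutableArray()
     Imgproc.convexityDefects(contour: contour, convexhull: hull, convexityDefects: defects)
     // each element is an Int4 (start_index, end_index, farthest_pt_index, fixpt_depth);
     // divide fixpt_depth by 256.0 to obtain the floating-point depth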

  • Calculates eigenvalues and eigenvectors of image blocks for corner detection.

    For every pixel p, the function cornerEigenValsAndVecs considers a blockSize \times blockSize neighborhood S(p). It calculates the covariation matrix of derivatives over the neighborhood as:

    M = \begin{bmatrix} \sum_{S(p)}(dI/dx)^2 & \sum_{S(p)}(dI/dx)(dI/dy) \\ \sum_{S(p)}(dI/dx)(dI/dy) & \sum_{S(p)}(dI/dy)^2 \end{bmatrix}

    where the derivatives are computed using the Sobel operator.

    After that, it finds eigenvectors and eigenvalues of M and stores them in the destination image as (\lambda_1, \lambda_2, x_1, y_1, x_2, y_2) where

    • \lambda_1, \lambda_2 are the non-sorted eigenvalues of M
    • x_1, y_1 are the eigenvectors corresponding to \lambda_1
    • x_2, y_2 are the eigenvectors corresponding to \lambda_2

    The output of the function can be used for robust edge or corner detection.

    Declaration

    Objective-C

    + (void)cornerEigenValsAndVecs:(nonnull Mat *)src
                               dst:(nonnull Mat *)dst
                         blockSize:(int)blockSize
                             ksize:(int)ksize
                        borderType:(BorderTypes)borderType;

    Swift

    class func cornerEigenValsAndVecs(src: Mat, dst: Mat, blockSize: Int32, ksize: Int32, borderType: BorderTypes)

    Parameters

    src

    Input single-channel 8-bit or floating-point image.

    dst

    Image to store the results. It has the same size as src and the type CV_32FC(6) .

    blockSize

    Neighborhood size (see details below).

    ksize

    Aperture parameter for the Sobel operator.

    borderType

    Pixel extrapolation method. See #BorderTypes. #BORDER_WRAP is not supported.

  • Calculates eigenvalues and eigenvectors of image blocks for corner detection.

    For every pixel p, the function cornerEigenValsAndVecs considers a blockSize \times blockSize neighborhood S(p). It calculates the covariation matrix of derivatives over the neighborhood as:

    M = \begin{bmatrix} \sum_{S(p)}(dI/dx)^2 & \sum_{S(p)}(dI/dx)(dI/dy) \\ \sum_{S(p)}(dI/dx)(dI/dy) & \sum_{S(p)}(dI/dy)^2 \end{bmatrix}

    where the derivatives are computed using the Sobel operator.

    After that, it finds eigenvectors and eigenvalues of M and stores them in the destination image as (\lambda_1, \lambda_2, x_1, y_1, x_2, y_2) where

    • \lambda_1, \lambda_2 are the non-sorted eigenvalues of M
    • x_1, y_1 are the eigenvectors corresponding to \lambda_1
    • x_2, y_2 are the eigenvectors corresponding to \lambda_2

    The output of the function can be used for robust edge or corner detection.

    Declaration

    Objective-C

    + (void)cornerEigenValsAndVecs:(nonnull Mat *)src
                               dst:(nonnull Mat *)dst
                         blockSize:(int)blockSize
                             ksize:(int)ksize;

    Swift

    class func cornerEigenValsAndVecs(src: Mat, dst: Mat, blockSize: Int32, ksize: Int32)

    Parameters

    src

    Input single-channel 8-bit or floating-point image.

    dst

    Image to store the results. It has the same size as src and the type CV_32FC(6) .

    blockSize

    Neighborhood size (see details below).

    ksize

    Aperture parameter for the Sobel operator.

  • Harris corner detector.

    The function runs the Harris corner detector on the image. Similarly to cornerMinEigenVal and cornerEigenValsAndVecs, for each pixel (x, y) it calculates a 2 \times 2 gradient covariance matrix M^{(x,y)} over a \texttt{blockSize} \times \texttt{blockSize} neighborhood. Then, it computes the following characteristic:

    \texttt{dst} (x,y) = \mathrm{det} M^{(x,y)} - k \cdot \left ( \mathrm{tr} M^{(x,y)} \right )^2

    Corners in the image can be found as the local maxima of this response map.

    Declaration

    Objective-C

    + (void)cornerHarris:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
               blockSize:(int)blockSize
                   ksize:(int)ksize
                       k:(double)k
              borderType:(BorderTypes)borderType;

    Swift

    class func cornerHarris(src: Mat, dst: Mat, blockSize: Int32, ksize: Int32, k: Double, borderType: BorderTypes)
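
    A minimal Swift sketch (assuming the opencv2 module; the input Mat and the k = 0.04 value are illustrative):

     import opencv2

     let gray = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1)  // placeholder input
     let response = Mat()
     // blockSize 2, Sobel aperture 3, Harris free parameter k = 0.04
     Imgproc.cornerHarris(src: gray, dst: response, blockSize: 2, ksize: 3, k: 0.04)
     // corners are local maxima of `response`; threshold it to select them
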
  • Harris corner detector.

    The function runs the Harris corner detector on the image. Similarly to cornerMinEigenVal and cornerEigenValsAndVecs, for each pixel (x, y) it calculates a 2 \times 2 gradient covariance matrix M^{(x,y)} over a \texttt{blockSize} \times \texttt{blockSize} neighborhood. Then, it computes the following characteristic:

    \texttt{dst} (x,y) = \mathrm{det} M^{(x,y)} - k \cdot \left ( \mathrm{tr} M^{(x,y)} \right )^2

    Corners in the image can be found as the local maxima of this response map.

    Declaration

    Objective-C

    + (void)cornerHarris:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
               blockSize:(int)blockSize
                   ksize:(int)ksize
                       k:(double)k;

    Swift

    class func cornerHarris(src: Mat, dst: Mat, blockSize: Int32, ksize: Int32, k: Double)
  • Calculates the minimal eigenvalue of gradient matrices for corner detection.

    The function is similar to cornerEigenValsAndVecs but it calculates and stores only the minimal eigenvalue of the covariance matrix of derivatives, that is, \min(\lambda_1, \lambda_2) in terms of the formulae in the cornerEigenValsAndVecs description.

    Declaration

    Objective-C

    + (void)cornerMinEigenVal:(nonnull Mat *)src
                          dst:(nonnull Mat *)dst
                    blockSize:(int)blockSize
                        ksize:(int)ksize
                   borderType:(BorderTypes)borderType;

    Swift

    class func cornerMinEigenVal(src: Mat, dst: Mat, blockSize: Int32, ksize: Int32, borderType: BorderTypes)

    Parameters

    src

    Input single-channel 8-bit or floating-point image.

    dst

    Image to store the minimal eigenvalues. It has the type CV_32FC1 and the same size as src .

    blockSize

    Neighborhood size (see the details on #cornerEigenValsAndVecs ).

    ksize

    Aperture parameter for the Sobel operator.

    borderType

    Pixel extrapolation method. See #BorderTypes. #BORDER_WRAP is not supported.

  • Calculates the minimal eigenvalue of gradient matrices for corner detection.

    The function is similar to cornerEigenValsAndVecs but it calculates and stores only the minimal eigenvalue of the covariance matrix of derivatives, that is, \min(\lambda_1, \lambda_2) in terms of the formulae in the cornerEigenValsAndVecs description.

    Declaration

    Objective-C

    + (void)cornerMinEigenVal:(nonnull Mat *)src
                          dst:(nonnull Mat *)dst
                    blockSize:(int)blockSize
                        ksize:(int)ksize;

    Swift

    class func cornerMinEigenVal(src: Mat, dst: Mat, blockSize: Int32, ksize: Int32)

    Parameters

    src

    Input single-channel 8-bit or floating-point image.

    dst

    Image to store the minimal eigenvalues. It has the type CV_32FC1 and the same size as src .

    blockSize

    Neighborhood size (see the details on #cornerEigenValsAndVecs ).

    ksize

    Aperture parameter for the Sobel operator.

  • Calculates the minimal eigenvalue of gradient matrices for corner detection.

    The function is similar to cornerEigenValsAndVecs but it calculates and stores only the minimal eigenvalue of the covariance matrix of derivatives, that is, \min(\lambda_1, \lambda_2) in terms of the formulae in the cornerEigenValsAndVecs description.

    Declaration

    Objective-C

    + (void)cornerMinEigenVal:(nonnull Mat *)src
                          dst:(nonnull Mat *)dst
                    blockSize:(int)blockSize;

    Swift

    class func cornerMinEigenVal(src: Mat, dst: Mat, blockSize: Int32)

    Parameters

    src

    Input single-channel 8-bit or floating-point image.

    dst

    Image to store the minimal eigenvalues. It has the type CV_32FC1 and the same size as src .

    blockSize

    Neighborhood size (see the details on #cornerEigenValsAndVecs ).

  • Refines the corner locations.

    The function iterates to find the sub-pixel accurate location of corners or radial saddle points, as illustrated in the figure below.

    [Figure: sub-pixel corner refinement]

    Sub-pixel accurate corner locator is based on the observation that every vector from the center q to a point p located within a neighborhood of q is orthogonal to the image gradient at p, subject to image and measurement noise. Consider the expression:

    \epsilon_i = {DI_{p_i}}^T \cdot (q - p_i)

    where {DI_{p_i}} is an image gradient at one of the points p_i in a neighborhood of q. The value of q is to be found so that \epsilon_i is minimized. A system of equations may be set up with \epsilon_i set to zero:

    \sum_i (DI_{p_i} \cdot {DI_{p_i}}^T) \cdot q - \sum_i (DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i) = 0

    where the gradients are summed within a neighborhood (“search window”) of q. Calling the first gradient term G and the second gradient term b gives:

    q = G^{-1} \cdot b

    The algorithm sets the center of the neighborhood window at this new center q and then iterates until the center stays within a set threshold.

    Declaration

    Objective-C

    + (void)cornerSubPix:(nonnull Mat *)image
                 corners:(nonnull Mat *)corners
                 winSize:(nonnull Size2i *)winSize
                zeroZone:(nonnull Size2i *)zeroZone
                criteria:(nonnull TermCriteria *)criteria;

    Swift

    class func cornerSubPix(image: Mat, corners: Mat, winSize: Size2i, zeroZone: Size2i, criteria: TermCriteria)
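
    A hedged Swift sketch (assuming the opencv2 module, a TermCriteria(type:maxCount:epsilon:) initializer, and that the type value 3 means COUNT + EPS; the corner matrix would normally come from a detector):

     import opencv2

     let gray = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1)   // placeholder input
     let corners = Mat(rows: 1, cols: 1, type: CvType.CV_32FC2)   // initial corner guesses
     // stop after 30 iterations or once the refinement shifts less than 0.01 px
     let criteria = TermCriteria(type: 3, maxCount: 30, epsilon: 0.01)
     Imgproc.cornerSubPix(image: gray, corners: corners,
                          winSize: Size2i(width: 5, height: 5),
                          zeroZone: Size2i(width: -1, height: -1),
                          criteria: criteria)
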
  • This function computes Hann window coefficients in two dimensions.

    See (http://en.wikipedia.org/wiki/Hann_function) and (http://en.wikipedia.org/wiki/Window_function) for more information.

    An example is shown below:

     // create hanning window of size 100x100 and type CV_32F
     Mat hann;
     createHanningWindow(hann, Size(100, 100), CV_32F);
    

    Declaration

    Objective-C

    + (void)createHanningWindow:(nonnull Mat *)dst
                        winSize:(nonnull Size2i *)winSize
                           type:(int)type;

    Swift

    class func createHanningWindow(dst: Mat, winSize: Size2i, type: Int32)

    Parameters

    dst

    Destination array to place Hann coefficients in

    winSize

    The window size specifications (both width and height must be > 1)

    type

    Created array type

  • Converts an image from one color space to another.

    The function converts an input image from one color space to another. In case of a transformation to or from the RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit Blue component, the second byte will be Green, and the third byte will be Red. The fourth, fifth, and sixth bytes would then be the second pixel (Blue, then Green, then Red), and so on.

    The conventional ranges for R, G, and B channel values are:

    • 0 to 255 for CV_8U images
    • 0 to 65535 for CV_16U images
    • 0 to 1 for CV_32F images

    In case of linear transformations, the range does not matter. But in case of a non-linear transformation, an input RGB image should be normalized to the proper value range to get the correct results, for example, for the RGB \rightarrow L*u*v* transformation. For example, if you have a 32-bit floating-point image directly converted from an 8-bit image without any scaling, then it will have the 0..255 value range instead of the 0..1 assumed by the function. So, before calling #cvtColor, you first need to scale the image down:

     img *= 1./255;
     cvtColor(img, img, COLOR_BGR2Luv);
    

    If you use #cvtColor with 8-bit images, the conversion will have some information lost. For many applications, this will not be noticeable but it is recommended to use 32-bit images in applications that need the full range of colors or that convert an image before an operation and then convert back.

    If conversion adds the alpha channel, its value will be set to the maximum of the corresponding channel range: 255 for CV_8U, 65535 for CV_16U, 1 for CV_32F.

    See

    REF: imgproc_color_conversions

    Declaration

    Objective-C

    + (void)cvtColor:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
                code:(ColorConversionCodes)code
               dstCn:(int)dstCn;

    Swift

    class func cvtColor(src: Mat, dst: Mat, code: ColorConversionCodes, dstCn: Int32)

    Parameters

    src

    input image: 8-bit unsigned, 16-bit unsigned ( CV_16UC… ), or single-precision floating-point.

    dst

    output image of the same size and depth as src.

    code

    color space conversion code (see #ColorConversionCodes).

    dstCn

    number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code.
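
    For example (a sketch assuming the opencv2 module; the input Mat is a placeholder):

     import opencv2

     let bgr = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3)  // placeholder BGR input
     let gray = Mat()
     Imgproc.cvtColor(src: bgr, dst: gray, code: ColorConversionCodes.COLOR_BGR2GRAY)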

  • Converts an image from one color space to another.

    The function converts an input image from one color space to another. In case of a transformation to or from the RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit Blue component, the second byte will be Green, and the third byte will be Red. The fourth, fifth, and sixth bytes would then be the second pixel (Blue, then Green, then Red), and so on.

    The conventional ranges for R, G, and B channel values are:

    • 0 to 255 for CV_8U images
    • 0 to 65535 for CV_16U images
    • 0 to 1 for CV_32F images

    In case of linear transformations, the range does not matter. But in case of a non-linear transformation, an input RGB image should be normalized to the proper value range to get the correct results, for example, for the RGB \rightarrow L*u*v* transformation. For example, if you have a 32-bit floating-point image directly converted from an 8-bit image without any scaling, then it will have the 0..255 value range instead of the 0..1 assumed by the function. So, before calling #cvtColor, you first need to scale the image down:

     img *= 1./255;
     cvtColor(img, img, COLOR_BGR2Luv);
    

    If you use #cvtColor with 8-bit images, some information will be lost in the conversion. For many applications, this will not be noticeable, but it is recommended to use 32-bit images in applications that need the full range of colors or that convert an image before an operation and then convert back.

    If conversion adds the alpha channel, its value will be set to the maximum of the corresponding channel range: 255 for CV_8U, 65535 for CV_16U, 1 for CV_32F.

    See

    REF: imgproc_color_conversions

    Declaration

    Objective-C

    + (void)cvtColor:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
                code:(ColorConversionCodes)code;

    Swift

    class func cvtColor(src: Mat, dst: Mat, code: ColorConversionCodes)

    Parameters

    src

    input image: 8-bit unsigned, 16-bit unsigned ( CV_16UC… ), or single-precision floating-point.

    dst

    output image of the same size and depth as src.

    code

    color space conversion code (see #ColorConversionCodes); the number of output channels is derived automatically from src and code.

  • Converts an image from one color space to another where the source image is stored in two planes.

    Currently, this function only supports YUV420 to RGB conversion, using one of the following codes:

    • #COLOR_YUV2BGR_NV12
    • #COLOR_YUV2RGB_NV12
    • #COLOR_YUV2BGRA_NV12
    • #COLOR_YUV2RGBA_NV12
    • #COLOR_YUV2BGR_NV21
    • #COLOR_YUV2RGB_NV21
    • #COLOR_YUV2BGRA_NV21
    • #COLOR_YUV2RGBA_NV21

    Declaration

    Objective-C

    + (void)cvtColorTwoPlane:(nonnull Mat *)src1
                        src2:(nonnull Mat *)src2
                         dst:(nonnull Mat *)dst
                        code:(int)code;

    Swift

    class func cvtColorTwoPlane(src1: Mat, src2: Mat, dst: Mat, code: Int32)
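
    A rough usage sketch follows. The plane shapes follow the standard YUV420 semi-planar (NV12) layout; passing the enum case's raw value for the Int32 code parameter is an assumption of these bindings.

     // Hedged sketch: convert an NV12 frame to RGBA. The Y plane is full
     // resolution; the interleaved UV plane is half width/height, 2 channels.
     let width: Int32 = 640, height: Int32 = 480
     let yPlane = Mat(rows: height, cols: width, type: CvType.CV_8UC1)
     let uvPlane = Mat(rows: height / 2, cols: width / 2, type: CvType.CV_8UC2)
     let rgba = Mat()
     Imgproc.cvtColorTwoPlane(src1: yPlane, src2: uvPlane, dst: rgba,
                              code: ColorConversionCodes.COLOR_YUV2RGBA_NV12.rawValue)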
  • main function for all demosaicing processes

    The function can do the following transformations:

    • Demosaicing using bilinear interpolation

      #COLOR_BayerBG2BGR , #COLOR_BayerGB2BGR , #COLOR_BayerRG2BGR , #COLOR_BayerGR2BGR

      #COLOR_BayerBG2GRAY , #COLOR_BayerGB2GRAY , #COLOR_BayerRG2GRAY , #COLOR_BayerGR2GRAY

    • Demosaicing using Variable Number of Gradients.

      #COLOR_BayerBG2BGR_VNG , #COLOR_BayerGB2BGR_VNG , #COLOR_BayerRG2BGR_VNG , #COLOR_BayerGR2BGR_VNG

    • Edge-Aware Demosaicing.

      #COLOR_BayerBG2BGR_EA , #COLOR_BayerGB2BGR_EA , #COLOR_BayerRG2BGR_EA , #COLOR_BayerGR2BGR_EA

    • Demosaicing with alpha channel

      #COLOR_BayerBG2BGRA , #COLOR_BayerGB2BGRA , #COLOR_BayerRG2BGRA , #COLOR_BayerGR2BGRA

    Declaration

    Objective-C

    + (void)demosaicing:(nonnull Mat *)src
                    dst:(nonnull Mat *)dst
                   code:(int)code
                  dstCn:(int)dstCn;

    Swift

    class func demosaicing(src: Mat, dst: Mat, code: Int32, dstCn: Int32)

    Parameters

    src

    input image: 8-bit unsigned or 16-bit unsigned.

    dst

    output image of the same size and depth as src.

    code

    Color space conversion code (see the description above).

    dstCn

    number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code.
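
    A minimal sketch, assuming a single-channel 8-bit raw Bayer frame and the Int32 raw value of the conversion code:

     // Hedged sketch: demosaic a BGGR Bayer frame to BGR with bilinear
     // interpolation; dstCn = 0 lets the channel count be derived.
     let raw = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1)
     let bgr = Mat()
     Imgproc.demosaicing(src: raw, dst: bgr,
                         code: ColorConversionCodes.COLOR_BayerBG2BGR.rawValue,
                         dstCn: 0)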

  • main function for all demosaicing processes

    The function can do the following transformations:

    • Demosaicing using bilinear interpolation

      #COLOR_BayerBG2BGR , #COLOR_BayerGB2BGR , #COLOR_BayerRG2BGR , #COLOR_BayerGR2BGR

      #COLOR_BayerBG2GRAY , #COLOR_BayerGB2GRAY , #COLOR_BayerRG2GRAY , #COLOR_BayerGR2GRAY

    • Demosaicing using Variable Number of Gradients.

      #COLOR_BayerBG2BGR_VNG , #COLOR_BayerGB2BGR_VNG , #COLOR_BayerRG2BGR_VNG , #COLOR_BayerGR2BGR_VNG

    • Edge-Aware Demosaicing.

      #COLOR_BayerBG2BGR_EA , #COLOR_BayerGB2BGR_EA , #COLOR_BayerRG2BGR_EA , #COLOR_BayerGR2BGR_EA

    • Demosaicing with alpha channel

      #COLOR_BayerBG2BGRA , #COLOR_BayerGB2BGRA , #COLOR_BayerRG2BGRA , #COLOR_BayerGR2BGRA

    Declaration

    Objective-C

    + (void)demosaicing:(nonnull Mat *)src dst:(nonnull Mat *)dst code:(int)code;

    Swift

    class func demosaicing(src: Mat, dst: Mat, code: Int32)

    Parameters

    src

    input image: 8-bit unsigned or 16-bit unsigned.

    dst

    output image of the same size and depth as src.

    code

    Color space conversion code (see the description above); the number of output channels is derived automatically from src and code.

  • Dilates an image by using a specific structuring element.

    The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:

    \texttt{dst}(x,y) = \max_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)dilate:(nonnull Mat *)src
                dst:(nonnull Mat *)dst
             kernel:(nonnull Mat *)kernel
             anchor:(nonnull Point2i *)anchor
         iterations:(int)iterations
         borderType:(BorderTypes)borderType
        borderValue:(nonnull Scalar *)borderValue;

    Swift

    class func dilate(src: Mat, dst: Mat, kernel: Mat, anchor: Point2i, iterations: Int32, borderType: BorderTypes, borderValue: Scalar)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for dilation; if element=Mat(), a 3 x 3 rectangular structuring element is used. The kernel can be created using #getStructuringElement.

    anchor

    position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

    iterations

    number of times dilation is applied.

    borderType

    pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.

    borderValue

    border value in case of a constant border
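
    A minimal sketch of a typical call, assuming a binary mask and the #getStructuringElement helper mentioned above:

     // Hedged sketch: dilate a binary mask twice with a 3x3 rectangle.
     let mask = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(0))
     let kernel = Imgproc.getStructuringElement(shape: .MORPH_RECT,
                                                ksize: Size2i(width: 3, height: 3))
     let dilated = Mat()
     Imgproc.dilate(src: mask, dst: dilated, kernel: kernel,
                    anchor: Point2i(x: -1, y: -1), iterations: 2)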

  • Dilates an image by using a specific structuring element.

    The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:

    \texttt{dst}(x,y) = \max_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)dilate:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            kernel:(nonnull Mat *)kernel
            anchor:(nonnull Point2i *)anchor
        iterations:(int)iterations
        borderType:(BorderTypes)borderType;

    Swift

    class func dilate(src: Mat, dst: Mat, kernel: Mat, anchor: Point2i, iterations: Int32, borderType: BorderTypes)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for dilation; if element=Mat(), a 3 x 3 rectangular structuring element is used. The kernel can be created using #getStructuringElement.

    anchor

    position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

    iterations

    number of times dilation is applied.

    borderType

    pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.

  • Dilates an image by using a specific structuring element.

    The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:

    \texttt{dst}(x,y) = \max_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)dilate:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            kernel:(nonnull Mat *)kernel
            anchor:(nonnull Point2i *)anchor
        iterations:(int)iterations;

    Swift

    class func dilate(src: Mat, dst: Mat, kernel: Mat, anchor: Point2i, iterations: Int32)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for dilation; if element=Mat(), a 3 x 3 rectangular structuring element is used. The kernel can be created using #getStructuringElement.

    anchor

    position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

    iterations

    number of times dilation is applied.

  • Dilates an image by using a specific structuring element.

    The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:

    \texttt{dst}(x,y) = \max_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)dilate:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            kernel:(nonnull Mat *)kernel
            anchor:(nonnull Point2i *)anchor;

    Swift

    class func dilate(src: Mat, dst: Mat, kernel: Mat, anchor: Point2i)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for dilation; if element=Mat(), a 3 x 3 rectangular structuring element is used. The kernel can be created using #getStructuringElement.

    anchor

    position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

  • Dilates an image by using a specific structuring element.

    The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:

    \texttt{dst}(x,y) = \max_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)dilate:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            kernel:(nonnull Mat *)kernel;

    Swift

    class func dilate(src: Mat, dst: Mat, kernel: Mat)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for dilation; if element=Mat(), a 3 x 3 rectangular structuring element is used. The kernel can be created using #getStructuringElement.

  • Declaration

    Objective-C

    + (void)distanceTransform:(nonnull Mat *)src
                          dst:(nonnull Mat *)dst
                 distanceType:(DistanceTypes)distanceType
                     maskSize:(DistanceTransformMasks)maskSize
                      dstType:(int)dstType;

    Swift

    class func distanceTransform(src: Mat, dst: Mat, distanceType: DistanceTypes, maskSize: DistanceTransformMasks, dstType: Int32)

    Parameters

    src

    8-bit, single-channel (binary) source image.

    dst

    Output image with calculated distances. It is an 8-bit or 32-bit floating-point, single-channel image of the same size as src.

    distanceType

    Type of distance, see #DistanceTypes

    maskSize

    Size of the distance transform mask, see #DistanceTransformMasks. In case of the #DIST_L1 or #DIST_C distance type, the parameter is forced to 3 because a 3 x 3 mask gives the same result as a 5 x 5 or any larger aperture.

    dstType

    Type of output image. It can be CV_8U or CV_32F. Type CV_8U can be used only for the first variant of the function and distanceType == #DIST_L1.
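
    A minimal sketch, assuming a binary CV_8UC1 mask in which zero pixels mark the features distances are measured to:

     // Hedged sketch: precise Euclidean (L2) distance transform; the
     // result is CV_32F, since dstType CV_8U only applies to DIST_L1.
     let binary = Mat(rows: 64, cols: 64, type: CvType.CV_8UC1, scalar: Scalar(255))
     // (in real use, zero out the feature pixels first)
     let dist = Mat()
     Imgproc.distanceTransform(src: binary, dst: dist,
                               distanceType: .DIST_L2,
                               maskSize: .DIST_MASK_PRECISE)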

  • Declaration

    Objective-C

    + (void)distanceTransform:(nonnull Mat *)src
                          dst:(nonnull Mat *)dst
                 distanceType:(DistanceTypes)distanceType
                     maskSize:(DistanceTransformMasks)maskSize;

    Swift

    class func distanceTransform(src: Mat, dst: Mat, distanceType: DistanceTypes, maskSize: DistanceTransformMasks)

    Parameters

    src

    8-bit, single-channel (binary) source image.

    dst

    Output image with calculated distances. It is an 8-bit or 32-bit floating-point, single-channel image of the same size as src.

    distanceType

    Type of distance, see #DistanceTypes

    maskSize

    Size of the distance transform mask, see #DistanceTransformMasks. In case of the #DIST_L1 or #DIST_C distance type, the parameter is forced to 3 because a 3 x 3 mask gives the same result as a 5 x 5 or any larger aperture.

  • Calculates the distance to the closest zero pixel for each pixel of the source image.

    The function cv::distanceTransform calculates the approximate or precise distance from every binary image pixel to the nearest zero pixel. For zero image pixels, the distance will obviously be zero.

    When maskSize == #DIST_MASK_PRECISE and distanceType == #DIST_L2 , the function runs the algorithm described in CITE: Felzenszwalb04 . This algorithm is parallelized with the TBB library.

    In other cases, the algorithm CITE: Borgefors86 is used. This means that for a pixel the function finds the shortest path to the nearest zero pixel consisting of basic shifts: horizontal, vertical, diagonal, or knight's move (the latter is available for a 5 x 5 mask). The overall distance is calculated as a sum of these basic distances. Since the distance function should be symmetric, all of the horizontal and vertical shifts must have the same cost (denoted as a), all the diagonal shifts must have the same cost (denoted as b), and all knight's moves must have the same cost (denoted as c). For the #DIST_C and #DIST_L1 types, the distance is calculated precisely, whereas for #DIST_L2 (Euclidean distance) the distance can be calculated only with a relative error (a 5 x 5 mask gives more accurate results). For a, b, and c, OpenCV uses the values suggested in the original paper:

    • DIST_L1: a = 1, b = 2
    • DIST_L2:
      • 3 x 3: a=0.955, b=1.3693
      • 5 x 5: a=1, b=1.4, c=2.1969
    • DIST_C: a = 1, b = 1

    Typically, for a fast, coarse distance estimation #DIST_L2, a 3 x 3 mask is used. For a more accurate distance estimation #DIST_L2, a 5 x 5 mask or the precise algorithm is used. Note that both the precise and the approximate algorithms are linear in the number of pixels.

    This variant of the function not only computes the minimum distance for each pixel (x, y) but also identifies the nearest connected component consisting of zero pixels (labelType==#DIST_LABEL_CCOMP) or the nearest zero pixel (labelType==#DIST_LABEL_PIXEL). The index of the component/pixel is stored in labels(x, y). When labelType==#DIST_LABEL_CCOMP, the function automatically finds connected components of zero pixels in the input image and marks them with distinct labels. When labelType==#DIST_LABEL_PIXEL, the function scans through the input image and marks all the zero pixels with distinct labels.

    In this mode, the complexity is still linear. That is, the function provides a very fast way to compute the Voronoi diagram for a binary image. Currently, the second variant can use only the approximate distance transform algorithm, i.e. maskSize=#DIST_MASK_PRECISE is not supported yet.

    Declaration

    Objective-C

    + (void)distanceTransformWithLabels:(nonnull Mat *)src
                                    dst:(nonnull Mat *)dst
                                 labels:(nonnull Mat *)labels
                           distanceType:(DistanceTypes)distanceType
                               maskSize:(DistanceTransformMasks)maskSize
                              labelType:(DistanceTransformLabelTypes)labelType;

    Swift

    class func distanceTransform(src: Mat, dst: Mat, labels: Mat, distanceType: DistanceTypes, maskSize: DistanceTransformMasks, labelType: DistanceTransformLabelTypes)

    Parameters

    src

    8-bit, single-channel (binary) source image.

    dst

    Output image with calculated distances. It is an 8-bit or 32-bit floating-point, single-channel image of the same size as src.

    labels

    Output 2D array of labels (the discrete Voronoi diagram). It has the type CV_32SC1 and the same size as src.

    distanceType

    Type of distance, see #DistanceTypes

    maskSize

    Size of the distance transform mask, see #DistanceTransformMasks. #DIST_MASK_PRECISE is not supported by this variant. In case of the #DIST_L1 or #DIST_C distance type, the parameter is forced to 3 because a 3 x 3 mask gives the same result as a 5 x 5 or any larger aperture.

    labelType

    Type of the label array to build, see #DistanceTransformLabelTypes.
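
    A minimal sketch of the labeled variant, using the approximate 3 x 3 mask (the precise mask is not supported here, as noted above):

     // Hedged sketch: distances plus the discrete Voronoi diagram; labels
     // receives, per pixel, the index of the nearest zero pixel.
     let srcMask = Mat(rows: 64, cols: 64, type: CvType.CV_8UC1, scalar: Scalar(255))
     // (in real use, zero out the feature pixels first)
     let dist = Mat()
     let labels = Mat()
     Imgproc.distanceTransform(src: srcMask, dst: dist, labels: labels,
                               distanceType: .DIST_L2,
                               maskSize: .DIST_MASK_3,
                               labelType: .DIST_LABEL_PIXEL)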

  • Calculates the distance to the closest zero pixel for each pixel of the source image.

    The function cv::distanceTransform calculates the approximate or precise distance from every binary image pixel to the nearest zero pixel. For zero image pixels, the distance will obviously be zero.

    When maskSize == #DIST_MASK_PRECISE and distanceType == #DIST_L2 , the function runs the algorithm described in CITE: Felzenszwalb04 . This algorithm is parallelized with the TBB library.

    In other cases, the algorithm CITE: Borgefors86 is used. This means that for a pixel the function finds the shortest path to the nearest zero pixel consisting of basic shifts: horizontal, vertical, diagonal, or knight's move (the latter is available for a 5 x 5 mask). The overall distance is calculated as a sum of these basic distances. Since the distance function should be symmetric, all of the horizontal and vertical shifts must have the same cost (denoted as a), all the diagonal shifts must have the same cost (denoted as b), and all knight's moves must have the same cost (denoted as c). For the #DIST_C and #DIST_L1 types, the distance is calculated precisely, whereas for #DIST_L2 (Euclidean distance) the distance can be calculated only with a relative error (a 5 x 5 mask gives more accurate results). For a, b, and c, OpenCV uses the values suggested in the original paper:

    • DIST_L1: a = 1, b = 2
    • DIST_L2:
      • 3 x 3: a=0.955, b=1.3693
      • 5 x 5: a=1, b=1.4, c=2.1969
    • DIST_C: a = 1, b = 1

    Typically, for a fast, coarse distance estimation #DIST_L2, a 3 x 3 mask is used. For a more accurate distance estimation #DIST_L2, a 5 x 5 mask or the precise algorithm is used. Note that both the precise and the approximate algorithms are linear in the number of pixels.

    This variant of the function not only computes the minimum distance for each pixel (x, y) but also identifies the nearest connected component consisting of zero pixels (labelType==#DIST_LABEL_CCOMP) or the nearest zero pixel (labelType==#DIST_LABEL_PIXEL). The index of the component/pixel is stored in labels(x, y). When labelType==#DIST_LABEL_CCOMP, the function automatically finds connected components of zero pixels in the input image and marks them with distinct labels. When labelType==#DIST_LABEL_PIXEL, the function scans through the input image and marks all the zero pixels with distinct labels.

    In this mode, the complexity is still linear. That is, the function provides a very fast way to compute the Voronoi diagram for a binary image. Currently, the second variant can use only the approximate distance transform algorithm, i.e. maskSize=#DIST_MASK_PRECISE is not supported yet.

    Declaration

    Objective-C

    + (void)distanceTransformWithLabels:(nonnull Mat *)src
                                    dst:(nonnull Mat *)dst
                                 labels:(nonnull Mat *)labels
                           distanceType:(DistanceTypes)distanceType
                               maskSize:(DistanceTransformMasks)maskSize;

    Swift

    class func distanceTransform(src: Mat, dst: Mat, labels: Mat, distanceType: DistanceTypes, maskSize: DistanceTransformMasks)

    Parameters

    src

    8-bit, single-channel (binary) source image.

    dst

    Output image with calculated distances. It is an 8-bit or 32-bit floating-point, single-channel image of the same size as src.

    labels

    Output 2D array of labels (the discrete Voronoi diagram). It has the type CV_32SC1 and the same size as src.

    distanceType

    Type of distance, see #DistanceTypes

    maskSize

    Size of the distance transform mask, see #DistanceTransformMasks. #DIST_MASK_PRECISE is not supported by this variant. In case of the #DIST_L1 or #DIST_C distance type, the parameter is forced to 3 because a 3 x 3 mask gives the same result as a 5 x 5 or any larger aperture.

  • Draws contours outlines or filled contours.

    The function draws contour outlines in the image if \texttt{thickness} \ge 0 or fills the area bounded by the contours if \texttt{thickness} < 0. The example in snippets/imgproc_drawContours.cpp shows how to retrieve connected components from the binary image and label them.

    Note

    When thickness=#FILLED, the function is designed to handle connected components with holes correctly even when no hierarchy data is provided. This is done by analyzing all the outlines together using the even-odd rule. This may give incorrect results if you have a joint collection of separately retrieved contours. In order to solve this problem, you need to call #drawContours separately for each sub-group of contours, or iterate over the collection using the contourIdx parameter.

    Declaration

    Objective-C

    + (void)drawContours:(nonnull Mat *)image
                contours:(nonnull NSArray<NSArray<Point2i *> *> *)contours
              contourIdx:(int)contourIdx
                   color:(nonnull Scalar *)color
               thickness:(int)thickness
                lineType:(LineTypes)lineType
               hierarchy:(nonnull Mat *)hierarchy
                maxLevel:(int)maxLevel
                  offset:(nonnull Point2i *)offset;

    Swift

    class func drawContours(image: Mat, contours: [[Point2i]], contourIdx: Int32, color: Scalar, thickness: Int32, lineType: LineTypes, hierarchy: Mat, maxLevel: Int32, offset: Point2i)

    Parameters

    image

    Destination image.

    contours

    All the input contours. Each contour is stored as a point vector.

    contourIdx

    Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

    color

    Color of the contours.

    thickness

    Thickness of lines the contours are drawn with. If it is negative (for example, thickness=#FILLED ), the contour interiors are drawn.

    lineType

    Line connectivity. See #LineTypes

    hierarchy

    Optional information about hierarchy. It is only needed if you want to draw only some of the contours (see maxLevel ).

    maxLevel

    Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available.

    offset

    Optional contour shift parameter. Shift all the drawn contours by the specified \texttt{offset}=(dx,dy).
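
    A minimal sketch with a hand-built contour (a real pipeline would normally obtain the contours from #findContours):

     // Hedged sketch: draw one filled rectangular contour in green (BGR).
     let canvas = Mat(rows: 100, cols: 100, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))
     let square: [Point2i] = [Point2i(x: 20, y: 20), Point2i(x: 80, y: 20),
                              Point2i(x: 80, y: 80), Point2i(x: 20, y: 80)]
     Imgproc.drawContours(image: canvas, contours: [square], contourIdx: 0,
                          color: Scalar(0, 255, 0), thickness: -1) // -1 == #FILLED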

  • Draws contours outlines or filled contours.

    The function draws contour outlines in the image if \texttt{thickness} \ge 0 or fills the area bounded by the contours if \texttt{thickness} < 0. The example in snippets/imgproc_drawContours.cpp shows how to retrieve connected components from the binary image and label them.

    Note

    When thickness=#FILLED, the function is designed to handle connected components with holes correctly even when no hierarchy data is provided. This is done by analyzing all the outlines together using the even-odd rule. This may give incorrect results if you have a joint collection of separately retrieved contours. In order to solve this problem, you need to call #drawContours separately for each sub-group of contours, or iterate over the collection using the contourIdx parameter.

    Declaration

    Objective-C

    + (void)drawContours:(nonnull Mat *)image
                contours:(nonnull NSArray<NSArray<Point2i *> *> *)contours
              contourIdx:(int)contourIdx
                   color:(nonnull Scalar *)color
               thickness:(int)thickness
                lineType:(LineTypes)lineType
               hierarchy:(nonnull Mat *)hierarchy
                maxLevel:(int)maxLevel;

    Swift

    class func drawContours(image: Mat, contours: [[Point2i]], contourIdx: Int32, color: Scalar, thickness: Int32, lineType: LineTypes, hierarchy: Mat, maxLevel: Int32)

    Parameters

    image

    Destination image.

    contours

    All the input contours. Each contour is stored as a point vector.

    contourIdx

    Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

    color

    Color of the contours.

    thickness

    Thickness of lines the contours are drawn with. If it is negative (for example, thickness=#FILLED ), the contour interiors are drawn.

    lineType

    Line connectivity. See #LineTypes

    hierarchy

    Optional information about hierarchy. It is only needed if you want to draw only some of the contours (see maxLevel ).

    maxLevel

    Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available.

  • Draws contours outlines or filled contours.

    The function draws contour outlines in the image if \texttt{thickness} \ge 0 or fills the area bounded by the contours if \texttt{thickness} < 0. The example in snippets/imgproc_drawContours.cpp shows how to retrieve connected components from the binary image and label them.

    Note

    When thickness=#FILLED, the function is designed to handle connected components with holes correctly even when no hierarchy data is provided. This is done by analyzing all the outlines together using the even-odd rule. This may give incorrect results if you have a joint collection of separately retrieved contours. In order to solve this problem, you need to call #drawContours separately for each sub-group of contours, or iterate over the collection using the contourIdx parameter.

    Declaration

    Objective-C

    + (void)drawContours:(nonnull Mat *)image
                contours:(nonnull NSArray<NSArray<Point2i *> *> *)contours
              contourIdx:(int)contourIdx
                   color:(nonnull Scalar *)color
               thickness:(int)thickness
                lineType:(LineTypes)lineType
               hierarchy:(nonnull Mat *)hierarchy;

    Swift

    class func drawContours(image: Mat, contours: [[Point2i]], contourIdx: Int32, color: Scalar, thickness: Int32, lineType: LineTypes, hierarchy: Mat)

    Parameters

    image

    Destination image.

    contours

    All the input contours. Each contour is stored as a point vector.

    contourIdx

    Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

    color

    Color of the contours.

    thickness

    Thickness of lines the contours are drawn with. If it is negative (for example, thickness=#FILLED ), the contour interiors are drawn.

    lineType

    Line connectivity. See #LineTypes

    hierarchy

    Optional information about hierarchy. It is only needed if you want to draw only some of the contours (see maxLevel).

  • Draws contours outlines or filled contours.

    The function draws contour outlines in the image if \texttt{thickness} \ge 0 or fills the area bounded by the contours if \texttt{thickness} < 0. The example in snippets/imgproc_drawContours.cpp shows how to retrieve connected components from the binary image and label them.

    Note

    When thickness=#FILLED, the function is designed to handle connected components with holes correctly even when no hierarchy data is provided. This is done by analyzing all the outlines together using the even-odd rule. This may give incorrect results if you have a joint collection of separately retrieved contours. In order to solve this problem, you need to call #drawContours separately for each sub-group of contours, or iterate over the collection using the contourIdx parameter.

    Declaration

    Objective-C

    + (void)drawContours:(nonnull Mat *)image
                contours:(nonnull NSArray<NSArray<Point2i *> *> *)contours
              contourIdx:(int)contourIdx
                   color:(nonnull Scalar *)color
               thickness:(int)thickness
                lineType:(LineTypes)lineType;

    Swift

    class func drawContours(image: Mat, contours: [[Point2i]], contourIdx: Int32, color: Scalar, thickness: Int32, lineType: LineTypes)

    Parameters

    image

    Destination image.

    contours

    All the input contours. Each contour is stored as a point vector.

    contourIdx

    Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

    color

    Color of the contours.

    thickness

    Thickness of lines the contours are drawn with. If it is negative (for example, thickness=#FILLED ), the contour interiors are drawn.

    lineType

    Line connectivity. See #LineTypes

  • Draws contours outlines or filled contours.

    The function draws contour outlines in the image if \texttt{thickness} \ge 0 or fills the area bounded by the contours if \texttt{thickness} < 0. The example in snippets/imgproc_drawContours.cpp shows how to retrieve connected components from the binary image and label them.

    Note

    When thickness=#FILLED, the function is designed to handle connected components with holes correctly even when no hierarchy data is provided. This is done by analyzing all the outlines together using the even-odd rule. This may give incorrect results if you have a joint collection of separately retrieved contours. In order to solve this problem, you need to call #drawContours separately for each sub-group of contours, or iterate over the collection using the contourIdx parameter.

    Declaration

    Objective-C

    + (void)drawContours:(nonnull Mat *)image
                contours:(nonnull NSArray<NSArray<Point2i *> *> *)contours
              contourIdx:(int)contourIdx
                   color:(nonnull Scalar *)color
               thickness:(int)thickness;

    Swift

    class func drawContours(image: Mat, contours: [[Point2i]], contourIdx: Int32, color: Scalar, thickness: Int32)

    Parameters

    image

    Destination image.

    contours

    All the input contours. Each contour is stored as a point vector.

    contourIdx

    Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

    color

    Color of the contours.

    thickness

    Thickness of lines the contours are drawn with. If it is negative (for example, thickness=#FILLED), the contour interiors are drawn.

  • Draws contours outlines or filled contours.

    The function draws contour outlines in the image if \texttt{thickness} \ge 0 or fills the area bounded by the contours if \texttt{thickness} < 0. The example in snippets/imgproc_drawContours.cpp shows how to retrieve connected components from the binary image and label them.

    Note

    When thickness=#FILLED, the function is designed to handle connected components with holes correctly even when no hierarchy data is provided. This is done by analyzing all the outlines together using the even-odd rule. This may give incorrect results if you have a joint collection of separately retrieved contours. In order to solve this problem, you need to call #drawContours separately for each sub-group of contours, or iterate over the collection using the contourIdx parameter.

    Declaration

    Objective-C

    + (void)drawContours:(nonnull Mat *)image
                contours:(nonnull NSArray<NSArray<Point2i *> *> *)contours
              contourIdx:(int)contourIdx
                   color:(nonnull Scalar *)color;

    Swift

    class func drawContours(image: Mat, contours: [[Point2i]], contourIdx: Int32, color: Scalar)

    Parameters

    image

    Destination image.

    contours

    All the input contours. Each contour is stored as a point vector.

    contourIdx

    Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

    color

    Color of the contours.

  • Draws a marker on a predefined position in an image.

    The function cv::drawMarker draws a marker on a given position in the image. For the moment several marker types are supported, see #MarkerTypes for more information.

    Declaration

    Objective-C

    + (void)drawMarker:(nonnull Mat *)img
              position:(nonnull Point2i *)position
                 color:(nonnull Scalar *)color
            markerType:(MarkerTypes)markerType
            markerSize:(int)markerSize
             thickness:(int)thickness
             line_type:(LineTypes)line_type;

    Swift

    class func drawMarker(img: Mat, position: Point2i, color: Scalar, markerType: MarkerTypes, markerSize: Int32, thickness: Int32, line_type: LineTypes)

    Parameters

    img

    Image.

    position

    The point where the crosshair is positioned.

    color

    Line color.

    markerType

    The specific type of marker you want to use, see #MarkerTypes

    thickness

    Line thickness.

    line_type

    Type of the line, See #LineTypes

    markerSize

    The length of the marker axis [default = 20 pixels]
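
    A minimal usage sketch of the shorter overload:

     // Hedged sketch: draw a red (BGR) cross marker at the image centre,
     // with the default 20 px marker size and thickness.
     let img = Mat(rows: 200, cols: 200, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))
     Imgproc.drawMarker(img: img, position: Point2i(x: 100, y: 100),
                        color: Scalar(0, 0, 255), markerType: .MARKER_CROSS)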

  • Draws a marker on a predefined position in an image.

    The function cv::drawMarker draws a marker on a given position in the image. For the moment several marker types are supported, see #MarkerTypes for more information.

    Declaration

    Objective-C

    + (void)drawMarker:(nonnull Mat *)img
              position:(nonnull Point2i *)position
                 color:(nonnull Scalar *)color
            markerType:(MarkerTypes)markerType
            markerSize:(int)markerSize
             thickness:(int)thickness;

    Swift

    class func drawMarker(img: Mat, position: Point2i, color: Scalar, markerType: MarkerTypes, markerSize: Int32, thickness: Int32)

    Parameters

    img

    Image.

    position

    The point where the crosshair is positioned.

    color

    Line color.

    markerType

    The specific type of marker you want to use, see #MarkerTypes

    thickness

    Line thickness.

    markerSize

    The length of the marker axis [default = 20 pixels]

  • Draws a marker on a predefined position in an image.

    The function cv::drawMarker draws a marker on a given position in the image. For the moment several marker types are supported, see #MarkerTypes for more information.

    Declaration

    Objective-C

    + (void)drawMarker:(nonnull Mat *)img
              position:(nonnull Point2i *)position
                 color:(nonnull Scalar *)color
            markerType:(MarkerTypes)markerType
            markerSize:(int)markerSize;

    Swift

    class func drawMarker(img: Mat, position: Point2i, color: Scalar, markerType: MarkerTypes, markerSize: Int32)

    Parameters

    img

    Image.

    position

    The point where the crosshair is positioned.

    color

    Line color.

    markerType

    The specific type of marker you want to use, see #MarkerTypes

    markerSize

    The length of the marker axis [default = 20 pixels]

  • Draws a marker on a predefined position in an image.

    The function cv::drawMarker draws a marker on a given position in the image. For the moment several marker types are supported, see #MarkerTypes for more information.

    Declaration

    Objective-C

    + (void)drawMarker:(nonnull Mat *)img
              position:(nonnull Point2i *)position
                 color:(nonnull Scalar *)color
            markerType:(MarkerTypes)markerType;

    Swift

    class func drawMarker(img: Mat, position: Point2i, color: Scalar, markerType: MarkerTypes)

    Parameters

    img

    Image.

    position

    The point where the crosshair is positioned.

    color

    Line color.

    markerType

    The specific type of marker you want to use, see #MarkerTypes

  • Draws a marker on a predefined position in an image.

    The function cv::drawMarker draws a marker on a given position in the image. For the moment several marker types are supported, see #MarkerTypes for more information.

    Declaration

    Objective-C

    + (void)drawMarker:(nonnull Mat *)img
              position:(nonnull Point2i *)position
                 color:(nonnull Scalar *)color;

    Swift

    class func drawMarker(img: Mat, position: Point2i, color: Scalar)

    Parameters

    img

    Image.

    position

    The point where the crosshair is positioned.

    color

    Line color.

  • Draws a simple or thick elliptic arc or fills an ellipse sector.

    The function cv::ellipse with more parameters draws an ellipse outline, a filled ellipse, an elliptic arc, or a filled ellipse sector. The drawing code uses general parametric form. A piecewise-linear curve is used to approximate the elliptic arc boundary. If you need more control of the ellipse rendering, you can retrieve the curve using #ellipse2Poly and then render it with #polylines or fill it with #fillPoly. If you use the first variant of the function and want to draw the whole ellipse, not an arc, pass startAngle=0 and endAngle=360. If startAngle is greater than endAngle, they are swapped. The figure below explains the meaning of the parameters to draw the blue arc.

    Parameters of Elliptic Arc

    Declaration

    Objective-C

    + (void)ellipse:(nonnull Mat *)img
             center:(nonnull Point2i *)center
               axes:(nonnull Size2i *)axes
              angle:(double)angle
         startAngle:(double)startAngle
           endAngle:(double)endAngle
              color:(nonnull Scalar *)color
          thickness:(int)thickness
           lineType:(LineTypes)lineType
              shift:(int)shift;

    Swift

    class func ellipse(img: Mat, center: Point2i, axes: Size2i, angle: Double, startAngle: Double, endAngle: Double, color: Scalar, thickness: Int32, lineType: LineTypes, shift: Int32)

    Parameters

    img

    Image.

    center

    Center of the ellipse.

    axes

    Half of the size of the ellipse main axes.

    angle

    Ellipse rotation angle in degrees.

    startAngle

    Starting angle of the elliptic arc in degrees.

    endAngle

    Ending angle of the elliptic arc in degrees.

    color

    Ellipse color.

    thickness

    Thickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn.

    lineType

    Type of the ellipse boundary. See #LineTypes

    shift

    Number of fractional bits in the coordinates of the center and values of axes.
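
    A minimal sketch illustrating the whole-ellipse and filled-sector cases described above:

     // Hedged sketch: full ellipse outline, then a filled 90-degree sector.
     let img = Mat(rows: 200, cols: 300, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))
     let center = Point2i(x: 150, y: 100)
     let axes = Size2i(width: 100, height: 60)
     // whole ellipse: startAngle = 0, endAngle = 360
     Imgproc.ellipse(img: img, center: center, axes: axes, angle: 30,
                     startAngle: 0, endAngle: 360, color: Scalar(0, 255, 255))
     // negative thickness fills the sector
     Imgproc.ellipse(img: img, center: center, axes: axes, angle: 30,
                     startAngle: 0, endAngle: 90, color: Scalar(0, 0, 255),
                     thickness: -1)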

  • Draws a simple or thick elliptic arc or fills an ellipse sector.

    The function cv::ellipse with more parameters draws an ellipse outline, a filled ellipse, an elliptic arc, or a filled ellipse sector. The drawing code uses general parametric form. A piecewise-linear curve is used to approximate the elliptic arc boundary. If you need more control of the ellipse rendering, you can retrieve the curve using #ellipse2Poly and then render it with #polylines or fill it with #fillPoly. If you use the first variant of the function and want to draw the whole ellipse, not an arc, pass startAngle=0 and endAngle=360. If startAngle is greater than endAngle, they are swapped. The figure below explains the meaning of the parameters to draw the blue arc.

    Parameters of Elliptic Arc

    Declaration

    Objective-C

    + (void)ellipse:(nonnull Mat *)img
             center:(nonnull Point2i *)center
               axes:(nonnull Size2i *)axes
              angle:(double)angle
         startAngle:(double)startAngle
           endAngle:(double)endAngle
              color:(nonnull Scalar *)color
          thickness:(int)thickness
           lineType:(LineTypes)lineType;

    Swift

    class func ellipse(img: Mat, center: Point2i, axes: Size2i, angle: Double, startAngle: Double, endAngle: Double, color: Scalar, thickness: Int32, lineType: LineTypes)

    Parameters

    img

    Image.

    center

    Center of the ellipse.

    axes

    Half of the size of the ellipse main axes.

    angle

    Ellipse rotation angle in degrees.

    startAngle

    Starting angle of the elliptic arc in degrees.

    endAngle

    Ending angle of the elliptic arc in degrees.

    color

    Ellipse color.

    thickness

    Thickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn.

    lineType

    Type of the ellipse boundary. See #LineTypes

  • Draws a simple or thick elliptic arc or fills an ellipse sector.

    The function cv::ellipse with more parameters draws an ellipse outline, a filled ellipse, an elliptic arc, or a filled ellipse sector. The drawing code uses general parametric form. A piecewise-linear curve is used to approximate the elliptic arc boundary. If you need more control of the ellipse rendering, you can retrieve the curve using #ellipse2Poly and then render it with #polylines or fill it with #fillPoly. If you use the first variant of the function and want to draw the whole ellipse, not an arc, pass startAngle=0 and endAngle=360. If startAngle is greater than endAngle, they are swapped. The figure below explains the meaning of the parameters to draw the blue arc.

    Parameters of Elliptic Arc

    Declaration

    Objective-C

    + (void)ellipse:(nonnull Mat *)img
             center:(nonnull Point2i *)center
               axes:(nonnull Size2i *)axes
              angle:(double)angle
         startAngle:(double)startAngle
           endAngle:(double)endAngle
              color:(nonnull Scalar *)color
          thickness:(int)thickness;

    Swift

    class func ellipse(img: Mat, center: Point2i, axes: Size2i, angle: Double, startAngle: Double, endAngle: Double, color: Scalar, thickness: Int32)

    Parameters

    img

    Image.

    center

    Center of the ellipse.

    axes

    Half of the size of the ellipse main axes.

    angle

    Ellipse rotation angle in degrees.

    startAngle

    Starting angle of the elliptic arc in degrees.

    endAngle

    Ending angle of the elliptic arc in degrees.

    color

    Ellipse color.

    thickness

    Thickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn.

  • Draws a simple or thick elliptic arc or fills an ellipse sector.

    The function cv::ellipse with more parameters draws an ellipse outline, a filled ellipse, an elliptic arc, or a filled ellipse sector. The drawing code uses general parametric form. A piecewise-linear curve is used to approximate the elliptic arc boundary. If you need more control of the ellipse rendering, you can retrieve the curve using #ellipse2Poly and then render it with #polylines or fill it with #fillPoly. If you use the first variant of the function and want to draw the whole ellipse, not an arc, pass startAngle=0 and endAngle=360. If startAngle is greater than endAngle, they are swapped. The figure below explains the meaning of the parameters to draw the blue arc.

    Parameters of Elliptic Arc

    Declaration

    Objective-C

    + (void)ellipse:(nonnull Mat *)img
             center:(nonnull Point2i *)center
               axes:(nonnull Size2i *)axes
              angle:(double)angle
         startAngle:(double)startAngle
           endAngle:(double)endAngle
              color:(nonnull Scalar *)color;

    Swift

    class func ellipse(img: Mat, center: Point2i, axes: Size2i, angle: Double, startAngle: Double, endAngle: Double, color: Scalar)

    Parameters

    img

    Image.

    center

    Center of the ellipse.

    axes

    Half of the size of the ellipse main axes.

    angle

    Ellipse rotation angle in degrees.

    startAngle

    Starting angle of the elliptic arc in degrees.

    endAngle

    Ending angle of the elliptic arc in degrees.

    color

    Ellipse color.

  • Declaration

    Objective-C

    + (void)ellipse:(nonnull Mat *)img
                box:(nonnull RotatedRect *)box
              color:(nonnull Scalar *)color
          thickness:(int)thickness
           lineType:(LineTypes)lineType;

    Swift

    class func ellipse(img: Mat, box: RotatedRect, color: Scalar, thickness: Int32, lineType: LineTypes)

    Parameters

    img

    Image.

    box

    Alternative ellipse representation via RotatedRect. This means that the function draws an ellipse inscribed in the rotated rectangle.

    color

    Ellipse color.

    thickness

    Thickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn.

    lineType

    Type of the ellipse boundary. See #LineTypes

  • Declaration

    Objective-C

    + (void)ellipse:(nonnull Mat *)img
                box:(nonnull RotatedRect *)box
              color:(nonnull Scalar *)color
          thickness:(int)thickness;

    Swift

    class func ellipse(img: Mat, box: RotatedRect, color: Scalar, thickness: Int32)

    Parameters

    img

    Image.

    box

    Alternative ellipse representation via RotatedRect. This means that the function draws an ellipse inscribed in the rotated rectangle.

    color

    Ellipse color.

    thickness

    Thickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn.

  • Declaration

    Objective-C

    + (void)ellipse:(nonnull Mat *)img
                box:(nonnull RotatedRect *)box
              color:(nonnull Scalar *)color;

    Swift

    class func ellipse(img: Mat, box: RotatedRect, color: Scalar)

    Parameters

    img

    Image.

    box

    Alternative ellipse representation via RotatedRect. This means that the function draws an ellipse inscribed in the rotated rectangle.

    color

    Ellipse color.

  • Approximates an elliptic arc with a polyline.

    The function ellipse2Poly computes the vertices of a polyline that approximates the specified elliptic arc. It is used by #ellipse. If arcStart is greater than arcEnd, they are swapped.

    Declaration

    Objective-C

    + (void)ellipse2Poly:(nonnull Point2i *)center
                    axes:(nonnull Size2i *)axes
                   angle:(int)angle
                arcStart:(int)arcStart
                  arcEnd:(int)arcEnd
                   delta:(int)delta
                     pts:(nonnull NSMutableArray<Point2i *> *)pts;

    Swift

    class func ellipse2Poly(center: Point2i, axes: Size2i, angle: Int32, arcStart: Int32, arcEnd: Int32, delta: Int32, pts: NSMutableArray)

    Parameters

    center

    Center of the arc.

    axes

    Half of the size of the ellipse main axes. See #ellipse for details.

    angle

    Rotation angle of the ellipse in degrees. See #ellipse for details.

    arcStart

    Starting angle of the elliptic arc in degrees.

    arcEnd

    Ending angle of the elliptic arc in degrees.

    delta

    Angle between the subsequent polyline vertices. It defines the approximation accuracy.

    pts

    Output vector of polyline vertices.
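
    A minimal sketch: approximate a half-ellipse, then render the vertices. The #polylines overload used at the end is an assumption of these bindings; the rest follows the declaration above.

     // Hedged sketch: 5-degree steps along a half-ellipse.
     let pts = NSMutableArray()
     Imgproc.ellipse2Poly(center: Point2i(x: 100, y: 100),
                          axes: Size2i(width: 80, height: 40),
                          angle: 0, arcStart: 0, arcEnd: 180, delta: 5, pts: pts)
     let vertices = pts.compactMap { $0 as? Point2i }
     let canvas = Mat(rows: 200, cols: 200, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))
     Imgproc.polylines(img: canvas, pts: [vertices], isClosed: false,
                       color: Scalar(255, 255, 255)) // assumed overload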

  • Equalizes the histogram of a grayscale image.

    The function equalizes the histogram of the input image using the following algorithm:

    • Calculate the histogram H for src.
    • Normalize the histogram so that the sum of histogram bins is 255.
    • Compute the integral of the histogram: H'_i = \sum_{0 \le j < i} H(j)
    • Transform the image using H' as a look-up table: \texttt{dst}(x,y) = H'(\texttt{src}(x,y))

    The algorithm normalizes the brightness and increases the contrast of the image.

    Declaration

    Objective-C

    + (void)equalizeHist:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func equalizeHist(src: Mat, dst: Mat)

    Parameters

    src

    Source 8-bit single channel image.

    dst

    Destination image of the same size and type as src .
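
    A minimal usage sketch for a single-channel image:

     // Hedged sketch: equalize the histogram of a grayscale frame.
     let gray = Mat(rows: 240, cols: 320, type: CvType.CV_8UC1)
     let equalized = Mat()
     Imgproc.equalizeHist(src: gray, dst: equalized)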

  • Erodes an image by using a specific structuring element.

    The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:

    \texttt{dst}(x,y) = \min_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Erosion can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)erode:(nonnull Mat *)src
                dst:(nonnull Mat *)dst
             kernel:(nonnull Mat *)kernel
             anchor:(nonnull Point2i *)anchor
         iterations:(int)iterations
         borderType:(BorderTypes)borderType
        borderValue:(nonnull Scalar *)borderValue;

    Swift

    class func erode(src: Mat, dst: Mat, kernel: Mat, anchor: Point2i, iterations: Int32, borderType: BorderTypes, borderValue: Scalar)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for erosion; if element=Mat(), a 3 x 3 rectangular structuring element is used. Kernel can be created using #getStructuringElement.

    anchor

    position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

    iterations

    number of times erosion is applied.

    borderType

    pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.

    borderValue

    border value in case of a constant border
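
    A minimal sketch mirroring the dilation example earlier, here with an elliptical element:

     // Hedged sketch: erode a binary mask twice with a 5x5 ellipse.
     let mask = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(255))
     let kernel = Imgproc.getStructuringElement(shape: .MORPH_ELLIPSE,
                                                ksize: Size2i(width: 5, height: 5))
     let eroded = Mat()
     Imgproc.erode(src: mask, dst: eroded, kernel: kernel,
                   anchor: Point2i(x: -1, y: -1), iterations: 2)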

  • Erodes an image by using a specific structuring element.

    The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:

    \texttt{dst}(x,y) = \min_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Erosion can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)erode:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            kernel:(nonnull Mat *)kernel
            anchor:(nonnull Point2i *)anchor
        iterations:(int)iterations
        borderType:(BorderTypes)borderType;

    Swift

    class func erode(src: Mat, dst: Mat, kernel: Mat, anchor: Point2i, iterations: Int32, borderType: BorderTypes)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for erosion; if element=Mat(), a 3 x 3 rectangular structuring element is used. Kernel can be created using #getStructuringElement.

    anchor

    position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

    iterations

    number of times erosion is applied.

    borderType

    pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.

  • Erodes an image by using a specific structuring element.

    The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:

    \texttt{dst}(x,y) = \min_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Erosion can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)erode:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
            kernel:(nonnull Mat *)kernel
            anchor:(nonnull Point2i *)anchor
        iterations:(int)iterations;

    Swift

    class func erode(src: Mat, dst: Mat, kernel: Mat, anchor: Point2i, iterations: Int32)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for erosion; if element=Mat(), a 3 x 3 rectangular structuring element is used. Kernel can be created using #getStructuringElement.

    anchor

    position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

    iterations

    number of times erosion is applied.

  • Erodes an image by using a specific structuring element.

    The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:

    \texttt{dst}(x,y) = \min_{(x',y'): \, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x',y+y')

    The function supports the in-place mode. Erosion can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)erode:(nonnull Mat *)src
              dst:(nonnull Mat *)dst
           kernel:(nonnull Mat *)kernel
           anchor:(nonnull Point2i *)anchor;

    Swift

    class func erode(src: Mat, dst: Mat, kernel: Mat, anchor: Point2i)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for erosion; if element=Mat(), a 3 x 3 rectangular structuring element is used. Kernel can be created using #getStructuringElement.

    anchor

    position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

  • Erodes an image by using a specific structuring element.

    The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:

    \texttt{dst}(x,y) = \min_{(x',y'):\, \texttt{element}(x',y') \ne 0} \texttt{src}(x+x', y+y')

    The function supports in-place operation. Erosion can be applied several times (see the iterations parameter). In case of multi-channel images, each channel is processed independently.

    Declaration

    Objective-C

    + (void)erode:(nonnull Mat *)src
              dst:(nonnull Mat *)dst
           kernel:(nonnull Mat *)kernel;

    Swift

    class func erode(src: Mat, dst: Mat, kernel: Mat)

    Parameters

    src

    input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    output image of the same size and type as src.

    kernel

    structuring element used for erosion; if element=Mat(), a 3 x 3 rectangular structuring element is used. Kernel can be created using #getStructuringElement. With this overload, the anchor is at the element center.
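
    For illustration, here is a minimal Swift sketch (assuming the opencv2 module and its generated enum case names; the all-white input Mat is a placeholder for a real image):

    import opencv2

    // Hypothetical 8-bit single-channel input; in practice, load or capture one.
    let src = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(255.0))
    let dst = Mat()

    // 5 x 5 rectangular structuring element; anchor (-1, -1) means the element center.
    let element = Imgproc.getStructuringElement(shape: .MORPH_RECT, ksize: Size2i(width: 5, height: 5))

    // Two erosion passes; bright regions shrink by roughly two kernel radii.
    Imgproc.erode(src: src, dst: dst, kernel: element, anchor: Point2i(x: -1, y: -1), iterations: 2)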

  • Fills a convex polygon.

    The function cv::fillConvexPoly draws a filled convex polygon. It is much faster than #fillPoly and can fill not only convex polygons but any monotonic polygon without self-intersections, that is, a polygon whose contour intersects every horizontal line (scan line) at most twice (though its top-most and/or bottom edge could be horizontal).

    Declaration

    Objective-C

    + (void)fillConvexPoly:(nonnull Mat *)img
                    points:(nonnull NSArray<Point2i *> *)points
                     color:(nonnull Scalar *)color
                  lineType:(LineTypes)lineType
                     shift:(int)shift;

    Swift

    class func fillConvexPoly(img: Mat, points: [Point2i], color: Scalar, lineType: LineTypes, shift: Int32)

    Parameters

    img

    Image.

    points

    Polygon vertices.

    color

    Polygon color.

    lineType

    Type of the polygon boundaries. See #LineTypes

    shift

    Number of fractional bits in the vertex coordinates.

  • Fills a convex polygon.

    The function cv::fillConvexPoly draws a filled convex polygon. It is much faster than #fillPoly and can fill not only convex polygons but any monotonic polygon without self-intersections, that is, a polygon whose contour intersects every horizontal line (scan line) at most twice (though its top-most and/or bottom edge could be horizontal).

    Declaration

    Objective-C

    + (void)fillConvexPoly:(nonnull Mat *)img
                    points:(nonnull NSArray<Point2i *> *)points
                     color:(nonnull Scalar *)color
                  lineType:(LineTypes)lineType;

    Swift

    class func fillConvexPoly(img: Mat, points: [Point2i], color: Scalar, lineType: LineTypes)

    Parameters

    img

    Image.

    points

    Polygon vertices.

    color

    Polygon color.

    lineType

    Type of the polygon boundaries. See #LineTypes

  • Fills a convex polygon.

    The function cv::fillConvexPoly draws a filled convex polygon. It is much faster than #fillPoly and can fill not only convex polygons but any monotonic polygon without self-intersections, that is, a polygon whose contour intersects every horizontal line (scan line) at most twice (though its top-most and/or bottom edge could be horizontal).

    Declaration

    Objective-C

    + (void)fillConvexPoly:(nonnull Mat *)img
                    points:(nonnull NSArray<Point2i *> *)points
                     color:(nonnull Scalar *)color;

    Swift

    class func fillConvexPoly(img: Mat, points: [Point2i], color: Scalar)

    Parameters

    img

    Image.

    points

    Polygon vertices.

    color

    Polygon color.
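
    A minimal Swift sketch (assuming the opencv2 module; the canvas and vertices are made up for illustration):

    import opencv2

    // Black 3-channel canvas to draw on.
    let canvas = Mat(rows: 200, cols: 200, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))

    // A convex triangle in pixel coordinates.
    let triangle = [Point2i(x: 100, y: 20), Point2i(x: 180, y: 180), Point2i(x: 20, y: 180)]

    // Fill it with an anti-aliased boundary.
    Imgproc.fillConvexPoly(img: canvas, points: triangle, color: Scalar(0, 255, 0), lineType: .LINE_AA)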

  • Fills the area bounded by one or more polygons.

    The function cv::fillPoly fills an area bounded by several polygonal contours. The function can fill complex areas, for example, areas with holes, contours with self-intersections (some of their parts), and so forth.

    Declaration

    Objective-C

    + (void)fillPoly:(nonnull Mat *)img
                 pts:(nonnull NSArray<NSArray<Point2i *> *> *)pts
               color:(nonnull Scalar *)color
            lineType:(LineTypes)lineType
               shift:(int)shift
              offset:(nonnull Point2i *)offset;

    Swift

    class func fillPoly(img: Mat, pts: [[Point2i]], color: Scalar, lineType: LineTypes, shift: Int32, offset: Point2i)

    Parameters

    img

    Image.

    pts

    Array of polygons where each polygon is represented as an array of points.

    color

    Polygon color.

    lineType

    Type of the polygon boundaries. See #LineTypes

    shift

    Number of fractional bits in the vertex coordinates.

    offset

    Optional offset of all points of the contours.

  • Fills the area bounded by one or more polygons.

    The function cv::fillPoly fills an area bounded by several polygonal contours. The function can fill complex areas, for example, areas with holes, contours with self-intersections (some of their parts), and so forth.

    Declaration

    Objective-C

    + (void)fillPoly:(nonnull Mat *)img
                 pts:(nonnull NSArray<NSArray<Point2i *> *> *)pts
               color:(nonnull Scalar *)color
            lineType:(LineTypes)lineType
               shift:(int)shift;

    Swift

    class func fillPoly(img: Mat, pts: [[Point2i]], color: Scalar, lineType: LineTypes, shift: Int32)

    Parameters

    img

    Image.

    pts

    Array of polygons where each polygon is represented as an array of points.

    color

    Polygon color.

    lineType

    Type of the polygon boundaries. See #LineTypes

    shift

    Number of fractional bits in the vertex coordinates.

  • Fills the area bounded by one or more polygons.

    The function cv::fillPoly fills an area bounded by several polygonal contours. The function can fill complex areas, for example, areas with holes, contours with self-intersections (some of their parts), and so forth.

    Declaration

    Objective-C

    + (void)fillPoly:(nonnull Mat *)img
                 pts:(nonnull NSArray<NSArray<Point2i *> *> *)pts
               color:(nonnull Scalar *)color
            lineType:(LineTypes)lineType;

    Swift

    class func fillPoly(img: Mat, pts: [[Point2i]], color: Scalar, lineType: LineTypes)

    Parameters

    img

    Image.

    pts

    Array of polygons where each polygon is represented as an array of points.

    color

    Polygon color.

    lineType

    Type of the polygon boundaries. See #LineTypes

  • Fills the area bounded by one or more polygons.

    The function cv::fillPoly fills an area bounded by several polygonal contours. The function can fill complex areas, for example, areas with holes, contours with self-intersections (some of their parts), and so forth.

    Declaration

    Objective-C

    + (void)fillPoly:(nonnull Mat *)img
                 pts:(nonnull NSArray<NSArray<Point2i *> *> *)pts
               color:(nonnull Scalar *)color;

    Swift

    class func fillPoly(img: Mat, pts: [[Point2i]], color: Scalar)

    Parameters

    img

    Image.

    pts

    Array of polygons where each polygon is represented as an array of points.

    color

    Polygon color.
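
    A minimal Swift sketch (assuming the opencv2 module; coordinates are made up). Nesting one polygon inside another is what produces a hole:

    import opencv2

    let canvas = Mat(rows: 200, cols: 200, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))

    // Outer square plus an inner square; the inner one becomes a hole in the fill.
    let outer = [Point2i(x: 20, y: 20), Point2i(x: 180, y: 20), Point2i(x: 180, y: 180), Point2i(x: 20, y: 180)]
    let inner = [Point2i(x: 70, y: 70), Point2i(x: 130, y: 70), Point2i(x: 130, y: 130), Point2i(x: 70, y: 130)]

    Imgproc.fillPoly(img: canvas, pts: [outer, inner], color: Scalar(255, 255, 255))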

  • Convolves an image with the kernel.

    The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.

    The function actually computes correlation, not convolution:

    \texttt{dst}(x,y) = \sum_{\substack{0 \leq x' < \texttt{kernel.cols} \\ 0 \leq y' < \texttt{kernel.rows}}} \texttt{kernel}(x',y') * \texttt{src}(x + x' - \texttt{anchor.x},\, y + y' - \texttt{anchor.y})

    That is, the kernel is not mirrored around the anchor point. If you need a real convolution, flip the kernel using #flip and set the new anchor to (kernel.cols - anchor.x - 1, kernel.rows - anchor.y - 1).

    The function uses the DFT-based algorithm in case of sufficiently large kernels (~11 x 11 or larger) and the direct algorithm for small kernels.

    Declaration

    Objective-C

    + (void)filter2D:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
              ddepth:(int)ddepth
              kernel:(nonnull Mat *)kernel
              anchor:(nonnull Point2i *)anchor
               delta:(double)delta
          borderType:(BorderTypes)borderType;

    Swift

    class func filter2D(src: Mat, dst: Mat, ddepth: Int32, kernel: Mat, anchor: Point2i, delta: Double, borderType: BorderTypes)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src.

    ddepth

    desired depth of the destination image, see REF: filter_depths “combinations”

    kernel

    convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split and process them individually.

    anchor

    anchor of the kernel that indicates the relative position of a filtered point within the kernel; the anchor should lie within the kernel; default value (-1,-1) means that the anchor is at the kernel center.

    delta

    optional value added to the filtered pixels before storing them in dst.

    borderType

    pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.

  • Convolves an image with the kernel.

    The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.

    The function actually computes correlation, not convolution:

    \texttt{dst}(x,y) = \sum_{\substack{0 \leq x' < \texttt{kernel.cols} \\ 0 \leq y' < \texttt{kernel.rows}}} \texttt{kernel}(x',y') * \texttt{src}(x + x' - \texttt{anchor.x},\, y + y' - \texttt{anchor.y})

    That is, the kernel is not mirrored around the anchor point. If you need a real convolution, flip the kernel using #flip and set the new anchor to (kernel.cols - anchor.x - 1, kernel.rows - anchor.y - 1).

    The function uses the DFT-based algorithm in case of sufficiently large kernels (~11 x 11 or larger) and the direct algorithm for small kernels.

    Declaration

    Objective-C

    + (void)filter2D:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
              ddepth:(int)ddepth
              kernel:(nonnull Mat *)kernel
              anchor:(nonnull Point2i *)anchor
               delta:(double)delta;

    Swift

    class func filter2D(src: Mat, dst: Mat, ddepth: Int32, kernel: Mat, anchor: Point2i, delta: Double)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src.

    ddepth

    desired depth of the destination image, see REF: filter_depths “combinations”

    kernel

    convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split and process them individually.

    anchor

    anchor of the kernel that indicates the relative position of a filtered point within the kernel; the anchor should lie within the kernel; default value (-1,-1) means that the anchor is at the kernel center.

    delta

    optional value added to the filtered pixels before storing them in dst.

  • Convolves an image with the kernel.

    The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.

    The function actually computes correlation, not convolution:

    \texttt{dst}(x,y) = \sum_{\substack{0 \leq x' < \texttt{kernel.cols} \\ 0 \leq y' < \texttt{kernel.rows}}} \texttt{kernel}(x',y') * \texttt{src}(x + x' - \texttt{anchor.x},\, y + y' - \texttt{anchor.y})

    That is, the kernel is not mirrored around the anchor point. If you need a real convolution, flip the kernel using #flip and set the new anchor to (kernel.cols - anchor.x - 1, kernel.rows - anchor.y - 1).

    The function uses the DFT-based algorithm in case of sufficiently large kernels (~11 x 11 or larger) and the direct algorithm for small kernels.

    Declaration

    Objective-C

    + (void)filter2D:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
              ddepth:(int)ddepth
              kernel:(nonnull Mat *)kernel
              anchor:(nonnull Point2i *)anchor;

    Swift

    class func filter2D(src: Mat, dst: Mat, ddepth: Int32, kernel: Mat, anchor: Point2i)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src.

    ddepth

    desired depth of the destination image, see REF: filter_depths “combinations”

    kernel

    convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split and process them individually.

    anchor

    anchor of the kernel that indicates the relative position of a filtered point within the kernel; the anchor should lie within the kernel; default value (-1,-1) means that the anchor is at the kernel center.

  • Convolves an image with the kernel.

    The function applies an arbitrary linear filter to an image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values according to the specified border mode.

    The function actually computes correlation, not convolution:

    \texttt{dst}(x,y) = \sum_{\substack{0 \leq x' < \texttt{kernel.cols} \\ 0 \leq y' < \texttt{kernel.rows}}} \texttt{kernel}(x',y') * \texttt{src}(x + x' - \texttt{anchor.x},\, y + y' - \texttt{anchor.y})

    That is, the kernel is not mirrored around the anchor point. If you need a real convolution, flip the kernel using #flip and set the new anchor to (kernel.cols - anchor.x - 1, kernel.rows - anchor.y - 1).

    The function uses the DFT-based algorithm in case of sufficiently large kernels (~11 x 11 or larger) and the direct algorithm for small kernels.

    Declaration

    Objective-C

    + (void)filter2D:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
              ddepth:(int)ddepth
              kernel:(nonnull Mat *)kernel;

    Swift

    class func filter2D(src: Mat, dst: Mat, ddepth: Int32, kernel: Mat)

    Parameters

    src

    input image.

    dst

    output image of the same size and the same number of channels as src.

    ddepth

    desired depth of the destination image, see REF: filter_depths “combinations”

    kernel

    convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split and process them individually. With this overload, the anchor is at the kernel center.
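
    A minimal Swift sketch (assuming the opencv2 module and its Mat.put(row:col:data:) accessor) applying a common 3 x 3 sharpening kernel; ddepth = -1 keeps the source depth:

    import opencv2

    // Hypothetical input; in practice, load an image.
    let src = Mat(rows: 100, cols: 100, type: CvType.CV_8UC3, scalar: Scalar(128, 128, 128))
    let dst = Mat()

    // Single-channel floating-point correlation kernel.
    let kernel = Mat(rows: 3, cols: 3, type: CvType.CV_32F)
    _ = try? kernel.put(row: 0, col: 0, data: [ 0, -1,  0,
                                               -1,  5, -1,
                                                0, -1,  0] as [Float])

    Imgproc.filter2D(src: src, dst: dst, ddepth: -1, kernel: kernel)

    Since this kernel is symmetric, flipping it for a true convolution would give the same result; the distinction only matters for asymmetric kernels.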

  • Finds contours in a binary image.

    The function retrieves contours from the binary image using the algorithm CITE: Suzuki85 . The contours are a useful tool for shape analysis and object detection and recognition. See squares.cpp in the OpenCV sample directory.

    Note

    Since OpenCV 3.2, the source image is not modified by this function.

    Declaration

    Objective-C

    + (void)findContours:(nonnull Mat *)image
                contours:
                    (nonnull NSMutableArray<NSMutableArray<Point2i *> *> *)contours
               hierarchy:(nonnull Mat *)hierarchy
                    mode:(RetrievalModes)mode
                  method:(ContourApproximationModes)method
                  offset:(nonnull Point2i *)offset;

    Swift

    class func findContours(image: Mat, contours: NSMutableArray, hierarchy: Mat, mode: RetrievalModes, method: ContourApproximationModes, offset: Point2i)

    Parameters

    image

    Source, an 8-bit single-channel image. Non-zero pixels are treated as 1's. Zero pixels remain 0's, so the image is treated as binary. You can use #compare, #inRange, #threshold, #adaptiveThreshold, #Canny, and others to create a binary image out of a grayscale or color one. If mode equals #RETR_CCOMP or #RETR_FLOODFILL, the input can also be a 32-bit integer image of labels (CV_32SC1).

    contours

    Detected contours. Each contour is stored as a vector of points (e.g. std::vector<std::vector<cv::Point>>).

    hierarchy

    Optional output vector (e.g. std::vector<cv::Vec4i>), containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the elements hierarchy[i][0], hierarchy[i][1], hierarchy[i][2], and hierarchy[i][3] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.

    mode

    Contour retrieval mode, see #RetrievalModes

    method

    Contour approximation method, see #ContourApproximationModes

    offset

    Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.

  • Finds contours in a binary image.

    The function retrieves contours from the binary image using the algorithm CITE: Suzuki85 . The contours are a useful tool for shape analysis and object detection and recognition. See squares.cpp in the OpenCV sample directory.

    Note

    Since OpenCV 3.2, the source image is not modified by this function.

    Declaration

    Objective-C

    + (void)findContours:(nonnull Mat *)image
                contours:
                    (nonnull NSMutableArray<NSMutableArray<Point2i *> *> *)contours
               hierarchy:(nonnull Mat *)hierarchy
                    mode:(RetrievalModes)mode
                  method:(ContourApproximationModes)method;

    Swift

    class func findContours(image: Mat, contours: NSMutableArray, hierarchy: Mat, mode: RetrievalModes, method: ContourApproximationModes)

    Parameters

    image

    Source, an 8-bit single-channel image. Non-zero pixels are treated as 1's. Zero pixels remain 0's, so the image is treated as binary. You can use #compare, #inRange, #threshold, #adaptiveThreshold, #Canny, and others to create a binary image out of a grayscale or color one. If mode equals #RETR_CCOMP or #RETR_FLOODFILL, the input can also be a 32-bit integer image of labels (CV_32SC1).

    contours

    Detected contours. Each contour is stored as a vector of points (e.g. std::vector<std::vector<cv::Point>>).

    hierarchy

    Optional output vector (e.g. std::vector<cv::Vec4i>), containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the elements hierarchy[i][0], hierarchy[i][1], hierarchy[i][2], and hierarchy[i][3] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.

    mode

    Contour retrieval mode, see #RetrievalModes

    method

    Contour approximation method, see #ContourApproximationModes
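
    A minimal Swift sketch (assuming the opencv2 module; `binary` stands in for a thresholded image produced elsewhere):

    import opencv2

    // Placeholder binary input; normally the output of #threshold or #Canny.
    let binary = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(0.0))

    let contours = NSMutableArray()
    let hierarchy = Mat()

    Imgproc.findContours(image: binary, contours: contours, hierarchy: hierarchy, mode: .RETR_EXTERNAL, method: .CHAIN_APPROX_SIMPLE)

    print("found \(contours.count) external contours")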

  • Fits a line to a 2D or 3D point set.

    The function fitLine fits a line to a 2D or 3D point set by minimizing \sum_i \rho(r_i), where r_i is the distance between the i-th point and the line, and \rho(r) is a distance function, one of the following:

    • DIST_L2:
      \rho(r) = r^2/2 \quad \text{(the simplest and the fastest least-squares method)}
    • DIST_L1:
      \rho(r) = r
    • DIST_L12:
      \rho(r) = 2 \cdot ( \sqrt{1 + \frac{r^2}{2}} - 1)
    • DIST_FAIR:
      \rho(r) = C^2 \cdot \left( \frac{r}{C} - \log{\left(1 + \frac{r}{C}\right)} \right) \quad \text{where} \quad C = 1.3998
    • DIST_WELSCH:
      \rho(r) = \frac{C^2}{2} \cdot \left( 1 - \exp{\left(-\left(\frac{r}{C}\right)^2\right)} \right) \quad \text{where} \quad C = 2.9846
    • DIST_HUBER:
      \rho(r) = \begin{cases} r^2/2 & \text{if } r < C \\ C \cdot (r - C/2) & \text{otherwise} \end{cases} \quad \text{where} \quad C = 1.345

    The algorithm is based on the M-estimator ( http://en.wikipedia.org/wiki/M-estimator ) technique that iteratively fits the line using the weighted least-squares algorithm. After each iteration the weights w_i are adjusted to be inversely proportional to \rho(r_i).

    Declaration

    Objective-C

    + (void)fitLine:(nonnull Mat *)points
               line:(nonnull Mat *)line
           distType:(DistanceTypes)distType
              param:(double)param
               reps:(double)reps
               aeps:(double)aeps;

    Swift

    class func fitLine(points: Mat, line: Mat, distType: DistanceTypes, param: Double, reps: Double, aeps: Double)
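
    A minimal Swift sketch (assuming the opencv2 module and its Mat.put(row:col:data:) accessor); four noisy points near the line y = x are packed into a CV_32FC2 Mat:

    import opencv2

    let points = Mat(rows: 4, cols: 1, type: CvType.CV_32FC2)
    _ = try? points.put(row: 0, col: 0, data: [0, 0.1, 1, 1.0, 2, 1.9, 3, 3.2] as [Float])

    // On return, `line` holds (vx, vy, x0, y0): a unit direction vector and a point on the line.
    let line = Mat()
    Imgproc.fitLine(points: points, line: line, distType: .DIST_L2, param: 0, reps: 0.01, aeps: 0.01)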
  • Returns filter coefficients for computing spatial image derivatives.

    The function computes and returns the filter coefficients for spatial image derivatives. When ksize=FILTER_SCHARR, the 3 \times 3 Scharr kernels are generated (see #Scharr). Otherwise, Sobel kernels are generated (see #Sobel). The filters are normally passed to #sepFilter2D.

    Declaration

    Objective-C

    + (void)getDerivKernels:(nonnull Mat *)kx
                         ky:(nonnull Mat *)ky
                         dx:(int)dx
                         dy:(int)dy
                      ksize:(int)ksize
                  normalize:(BOOL)normalize
                      ktype:(int)ktype;

    Swift

    class func getDerivKernels(kx: Mat, ky: Mat, dx: Int32, dy: Int32, ksize: Int32, normalize: Bool, ktype: Int32)

    Parameters

    kx

    Output matrix of row filter coefficients. It has the type ktype .

    ky

    Output matrix of column filter coefficients. It has the type ktype .

    dx

    Derivative order with respect to x.

    dy

    Derivative order with respect to y.

    ksize

    Aperture size. It can be FILTER_SCHARR, 1, 3, 5, or 7.

    normalize

    Flag indicating whether to normalize (scale down) the filter coefficients or not. Theoretically, the coefficients should have the denominator 2^{ksize \cdot 2 - dx - dy - 2}. If you are going to filter floating-point images, you are likely to use the normalized kernels. But if you compute derivatives of an 8-bit image, store the results in a 16-bit image, and wish to preserve all the fractional bits, you may want to set normalize=false.

    ktype

    Type of filter coefficients. It can be CV_32F or CV_64F.

  • Returns filter coefficients for computing spatial image derivatives.

    The function computes and returns the filter coefficients for spatial image derivatives. When ksize=FILTER_SCHARR, the 3 \times 3 Scharr kernels are generated (see #Scharr). Otherwise, Sobel kernels are generated (see #Sobel). The filters are normally passed to #sepFilter2D.

    Declaration

    Objective-C

    + (void)getDerivKernels:(nonnull Mat *)kx
                         ky:(nonnull Mat *)ky
                         dx:(int)dx
                         dy:(int)dy
                      ksize:(int)ksize
                  normalize:(BOOL)normalize;

    Swift

    class func getDerivKernels(kx: Mat, ky: Mat, dx: Int32, dy: Int32, ksize: Int32, normalize: Bool)

    Parameters

    kx

    Output matrix of row filter coefficients. It has the type ktype .

    ky

    Output matrix of column filter coefficients. It has the type ktype .

    dx

    Derivative order with respect to x.

    dy

    Derivative order with respect to y.

    ksize

    Aperture size. It can be FILTER_SCHARR, 1, 3, 5, or 7.

    normalize

    Flag indicating whether to normalize (scale down) the filter coefficients or not. Theoretically, the coefficients should have the denominator 2^{ksize \cdot 2 - dx - dy - 2}. If you are going to filter floating-point images, you are likely to use the normalized kernels. But if you compute derivatives of an 8-bit image, store the results in a 16-bit image, and wish to preserve all the fractional bits, you may want to set normalize=false.

  • Returns filter coefficients for computing spatial image derivatives.

    The function computes and returns the filter coefficients for spatial image derivatives. When ksize=FILTER_SCHARR, the 3 \times 3 Scharr kernels are generated (see #Scharr). Otherwise, Sobel kernels are generated (see #Sobel). The filters are normally passed to #sepFilter2D.

    Declaration

    Objective-C

    + (void)getDerivKernels:(nonnull Mat *)kx
                         ky:(nonnull Mat *)ky
                         dx:(int)dx
                         dy:(int)dy
                      ksize:(int)ksize;

    Swift

    class func getDerivKernels(kx: Mat, ky: Mat, dx: Int32, dy: Int32, ksize: Int32)

    Parameters

    kx

    Output matrix of row filter coefficients. It has the type ktype .

    ky

    Output matrix of column filter coefficients. It has the type ktype .

    dx

    Derivative order with respect to x.

    dy

    Derivative order with respect to y.

    ksize

    Aperture size. It can be FILTER_SCHARR, 1, 3, 5, or 7. With this overload, the coefficients are not normalized and have the type CV_32F.
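
    A minimal Swift sketch (assuming the opencv2 module) requesting the 3 x 3 Sobel pair for a first derivative in x:

    import opencv2

    let kx = Mat()
    let ky = Mat()

    // Row and column filter coefficients for d/dx with a 3 x 3 aperture.
    Imgproc.getDerivKernels(kx: kx, ky: ky, dx: 1, dy: 0, ksize: 3)

    // kx and ky can now be passed to #sepFilter2D as the row and column kernels.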

  • Retrieves a pixel rectangle from an image with sub-pixel accuracy.

    The function getRectSubPix extracts pixels from src:

    patch(x, y) = src(x + \texttt{center.x} - ( \texttt{dst.cols} -1)*0.5, y + \texttt{center.y} - ( \texttt{dst.rows} -1)*0.5)

    where the values of the pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multi-channel images is processed independently. The image should be a single-channel or three-channel image. While the center of the rectangle must be inside the image, parts of the rectangle may be outside.

    Declaration

    Objective-C

    + (void)getRectSubPix:(nonnull Mat *)image
                patchSize:(nonnull Size2i *)patchSize
                   center:(nonnull Point2f *)center
                    patch:(nonnull Mat *)patch
                patchType:(int)patchType;

    Swift

    class func getRectSubPix(image: Mat, patchSize: Size2i, center: Point2f, patch: Mat, patchType: Int32)

    Parameters

    image

    Source image.

    patchSize

    Size of the extracted patch.

    center

    Floating point coordinates of the center of the extracted rectangle within the source image. The center must be inside the image.

    patch

    Extracted patch that has the size patchSize and the same number of channels as src .

    patchType

    Depth of the extracted pixels. By default, they have the same depth as src .

  • Retrieves a pixel rectangle from an image with sub-pixel accuracy.

    The function getRectSubPix extracts pixels from src:

    patch(x, y) = src(x + \texttt{center.x} - ( \texttt{dst.cols} -1)*0.5, y + \texttt{center.y} - ( \texttt{dst.rows} -1)*0.5)

    where the values of the pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multi-channel images is processed independently. The image should be a single-channel or three-channel image. While the center of the rectangle must be inside the image, parts of the rectangle may be outside.

    Declaration

    Objective-C

    + (void)getRectSubPix:(nonnull Mat *)image
                patchSize:(nonnull Size2i *)patchSize
                   center:(nonnull Point2f *)center
                    patch:(nonnull Mat *)patch;

    Swift

    class func getRectSubPix(image: Mat, patchSize: Size2i, center: Point2f, patch: Mat)

    Parameters

    image

    Source image.

    patchSize

    Size of the extracted patch.

    center

    Floating point coordinates of the center of the extracted rectangle within the source image. The center must be inside the image.

    patch

    Extracted patch that has the size patchSize and the same number of channels as src .
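
    A minimal Swift sketch (assuming the opencv2 module; the image is a synthetic placeholder) extracting a 21 x 21 patch around a non-integer center:

    import opencv2

    let image = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(128.0))
    let patch = Mat()

    // Bilinear interpolation fills the patch around the sub-pixel center.
    Imgproc.getRectSubPix(image: image, patchSize: Size2i(width: 21, height: 21), center: Point2f(x: 40.3, y: 52.7), patch: patch)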

  • Declaration

    Objective-C

    + (void)goodFeaturesToTrack:(Mat*)image corners:(NSMutableArray<Point2i*>*)corners maxCorners:(int)maxCorners qualityLevel:(double)qualityLevel minDistance:(double)minDistance mask:(Mat*)mask blockSize:(int)blockSize gradientSize:(int)gradientSize useHarrisDetector:(BOOL)useHarrisDetector k:(double)k NS_SWIFT_NAME(goodFeaturesToTrack(image:corners:maxCorners:qualityLevel:minDistance:mask:blockSize:gradientSize:useHarrisDetector:k:));

    Swift

    class func goodFeaturesToTrack(image: Mat, corners: NSMutableArray, maxCorners: Int32, qualityLevel: Double, minDistance: Double, mask: Mat, blockSize: Int32, gradientSize: Int32, useHarrisDetector: Bool, k: Double)
  • Declaration

    Objective-C

    + (void)goodFeaturesToTrack:(Mat*)image corners:(NSMutableArray<Point2i*>*)corners maxCorners:(int)maxCorners qualityLevel:(double)qualityLevel minDistance:(double)minDistance mask:(Mat*)mask blockSize:(int)blockSize gradientSize:(int)gradientSize useHarrisDetector:(BOOL)useHarrisDetector NS_SWIFT_NAME(goodFeaturesToTrack(image:corners:maxCorners:qualityLevel:minDistance:mask:blockSize:gradientSize:useHarrisDetector:));

    Swift

    class func goodFeaturesToTrack(image: Mat, corners: NSMutableArray, maxCorners: Int32, qualityLevel: Double, minDistance: Double, mask: Mat, blockSize: Int32, gradientSize: Int32, useHarrisDetector: Bool)
  • Declaration

    Objective-C

    + (void)goodFeaturesToTrack:(Mat*)image corners:(NSMutableArray<Point2i*>*)corners maxCorners:(int)maxCorners qualityLevel:(double)qualityLevel minDistance:(double)minDistance mask:(Mat*)mask blockSize:(int)blockSize gradientSize:(int)gradientSize NS_SWIFT_NAME(goodFeaturesToTrack(image:corners:maxCorners:qualityLevel:minDistance:mask:blockSize:gradientSize:));

    Swift

    class func goodFeaturesToTrack(image: Mat, corners: NSMutableArray, maxCorners: Int32, qualityLevel: Double, minDistance: Double, mask: Mat, blockSize: Int32, gradientSize: Int32)
  • Determines strong corners on an image.

    The function finds the most prominent corners in the image or in the specified image region, as described in CITE: Shi94

    • The function calculates the corner quality measure at every source image pixel using #cornerMinEigenVal or #cornerHarris.
    • It performs non-maximum suppression (only the local maxima in a 3 x 3 neighborhood are retained).
    • Corners with a minimal eigenvalue less than \texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y) are rejected.
    • The remaining corners are sorted by the quality measure in descending order.
    • The function then throws away each corner for which there is a stronger corner at a distance less than minDistance.

    The function can be used to initialize a point-based tracker of an object.

    Note

    If the function is called with different values A and B of the parameter qualityLevel , and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B .

    Declaration

    Objective-C

    + (void)goodFeaturesToTrack:(nonnull Mat *)image
                        corners:(nonnull NSMutableArray<Point2i *> *)corners
                     maxCorners:(int)maxCorners
                   qualityLevel:(double)qualityLevel
                    minDistance:(double)minDistance
                           mask:(nonnull Mat *)mask
                      blockSize:(int)blockSize
              useHarrisDetector:(BOOL)useHarrisDetector
                              k:(double)k;

    Swift

    class func goodFeaturesToTrack(image: Mat, corners: NSMutableArray, maxCorners: Int32, qualityLevel: Double, minDistance: Double, mask: Mat, blockSize: Int32, useHarrisDetector: Bool, k: Double)
  • Determines strong corners on an image.

    The function finds the most prominent corners in the image or in the specified image region, as described in CITE: Shi94

    • The function calculates the corner quality measure at every source image pixel using #cornerMinEigenVal or #cornerHarris.
    • It performs non-maximum suppression (only the local maxima in a 3 x 3 neighborhood are retained).
    • Corners with a minimal eigenvalue less than \texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y) are rejected.
    • The remaining corners are sorted by the quality measure in descending order.
    • The function then throws away each corner for which there is a stronger corner at a distance less than minDistance.

    The function can be used to initialize a point-based tracker of an object.

    Note

    If the function is called with different values A and B of the parameter qualityLevel , and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B .

    Declaration

    Objective-C

    + (void)goodFeaturesToTrack:(nonnull Mat *)image
                        corners:(nonnull NSMutableArray<Point2i *> *)corners
                     maxCorners:(int)maxCorners
                   qualityLevel:(double)qualityLevel
                    minDistance:(double)minDistance
                           mask:(nonnull Mat *)mask
                      blockSize:(int)blockSize
              useHarrisDetector:(BOOL)useHarrisDetector;

    Swift

    class func goodFeaturesToTrack(image: Mat, corners: NSMutableArray, maxCorners: Int32, qualityLevel: Double, minDistance: Double, mask: Mat, blockSize: Int32, useHarrisDetector: Bool)
  • Determines strong corners on an image.

    The function finds the most prominent corners in the image or in the specified image region, as described in CITE: Shi94

    • The function calculates the corner quality measure at every source image pixel using #cornerMinEigenVal or #cornerHarris.
    • It performs non-maximum suppression (only the local maxima in a 3 x 3 neighborhood are retained).
    • Corners with a minimal eigenvalue less than \texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y) are rejected.
    • The remaining corners are sorted by the quality measure in descending order.
    • The function then throws away each corner for which there is a stronger corner at a distance less than minDistance.

    The function can be used to initialize a point-based tracker of an object.

    Note

    If the function is called with different values A and B of the parameter qualityLevel , and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B .

    Declaration

    Objective-C

    + (void)goodFeaturesToTrack:(nonnull Mat *)image
                        corners:(nonnull NSMutableArray<Point2i *> *)corners
                     maxCorners:(int)maxCorners
                   qualityLevel:(double)qualityLevel
                    minDistance:(double)minDistance
                           mask:(nonnull Mat *)mask
                      blockSize:(int)blockSize;

    Swift

    class func goodFeaturesToTrack(image: Mat, corners: NSMutableArray, maxCorners: Int32, qualityLevel: Double, minDistance: Double, mask: Mat, blockSize: Int32)
  • Determines strong corners on an image.

    The function finds the most prominent corners in the image or in the specified image region, as described in CITE: Shi94

    • The function calculates the corner quality measure at every source image pixel using #cornerMinEigenVal or #cornerHarris.
    • It performs non-maximum suppression (only the local maxima in a 3 x 3 neighborhood are retained).
    • Corners with a minimal eigenvalue less than \texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y) are rejected.
    • The remaining corners are sorted by the quality measure in descending order.
    • The function then throws away each corner for which there is a stronger corner at a distance less than minDistance.

    The function can be used to initialize a point-based tracker of an object.

    Note

    If the function is called with different values A and B of the parameter qualityLevel , and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B .

    Declaration

    Objective-C

    + (void)goodFeaturesToTrack:(nonnull Mat *)image
                        corners:(nonnull NSMutableArray<Point2i *> *)corners
                     maxCorners:(int)maxCorners
                   qualityLevel:(double)qualityLevel
                    minDistance:(double)minDistance
                           mask:(nonnull Mat *)mask;

    Swift

    class func goodFeaturesToTrack(image: Mat, corners: NSMutableArray, maxCorners: Int32, qualityLevel: Double, minDistance: Double, mask: Mat)
  • Determines strong corners on an image.

    The function finds the most prominent corners in the image or in the specified image region, as described in CITE: Shi94

    • The function calculates the corner quality measure at every source image pixel using #cornerMinEigenVal or #cornerHarris.
    • It performs non-maximum suppression (only the local maxima in a 3 x 3 neighborhood are retained).
    • Corners with a minimal eigenvalue less than \texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y) are rejected.
    • The remaining corners are sorted by the quality measure in descending order.
    • The function then throws away each corner for which there is a stronger corner at a distance less than minDistance.

    The function can be used to initialize a point-based tracker of an object.

    Note

    If the function is called with different values A and B of the parameter qualityLevel , and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B .

    Declaration

    Objective-C

    + (void)goodFeaturesToTrack:(nonnull Mat *)image
                        corners:(nonnull NSMutableArray<Point2i *> *)corners
                     maxCorners:(int)maxCorners
                   qualityLevel:(double)qualityLevel
                    minDistance:(double)minDistance;

    Swift

    class func goodFeaturesToTrack(image: Mat, corners: NSMutableArray, maxCorners: Int32, qualityLevel: Double, minDistance: Double)
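
    A minimal Swift sketch (assuming the opencv2 module; `gray` stands in for a real 8-bit grayscale frame):

    import opencv2

    // Placeholder input; a flat image yields no corners, of course.
    let gray = Mat(rows: 100, cols: 100, type: CvType.CV_8UC1, scalar: Scalar(0.0))

    let corners = NSMutableArray()

    // Up to 50 corners, quality at least 1% of the best corner, at least 10 px apart.
    Imgproc.goodFeaturesToTrack(image: gray, corners: corners, maxCorners: 50, qualityLevel: 0.01, minDistance: 10)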
  • Runs the GrabCut algorithm.

    The function implements the GrabCut image segmentation algorithm.

    Declaration

    Objective-C

    + (void)grabCut:(nonnull Mat *)img
               mask:(nonnull Mat *)mask
               rect:(nonnull Rect2i *)rect
           bgdModel:(nonnull Mat *)bgdModel
           fgdModel:(nonnull Mat *)fgdModel
          iterCount:(int)iterCount
               mode:(int)mode;

    Swift

    class func grabCut(img: Mat, mask: Mat, rect: Rect2i, bgdModel: Mat, fgdModel: Mat, iterCount: Int32, mode: Int32)

    Parameters

    img

    Input 8-bit 3-channel image.

    mask

    Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to #GC_INIT_WITH_RECT. Its elements may have one of the #GrabCutClasses.

    rect

    ROI containing a segmented object. The pixels outside of the ROI are marked as “obvious background”. The parameter is only used when mode==#GC_INIT_WITH_RECT .

    bgdModel

    Temporary array for the background model. Do not modify it while you are processing the same image.

    fgdModel

    Temporary array for the foreground model. Do not modify it while you are processing the same image.

    iterCount

    Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode==#GC_INIT_WITH_MASK or mode==GC_EVAL .

    mode

    Operation mode that could be one of the #GrabCutModes

  • Runs the GrabCut algorithm.

    The function implements the GrabCut image segmentation algorithm.

    Declaration

    Objective-C

    + (void)grabCut:(nonnull Mat *)img
               mask:(nonnull Mat *)mask
               rect:(nonnull Rect2i *)rect
           bgdModel:(nonnull Mat *)bgdModel
           fgdModel:(nonnull Mat *)fgdModel
          iterCount:(int)iterCount;

    Swift

    class func grabCut(img: Mat, mask: Mat, rect: Rect2i, bgdModel: Mat, fgdModel: Mat, iterCount: Int32)

    Parameters

    img

    Input 8-bit 3-channel image.

    mask

    Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to #GC_INIT_WITH_RECT. Its elements may have one of the #GrabCutClasses.

    rect

    ROI containing a segmented object. The pixels outside of the ROI are marked as “obvious background”. The parameter is only used when mode==#GC_INIT_WITH_RECT .

    bgdModel

    Temporary array for the background model. Do not modify it while you are processing the same image.

    fgdModel

    Temporary array for the foreground model. Do not modify it while you are processing the same image.

    iterCount

    Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode==#GC_INIT_WITH_MASK or mode==GC_EVAL .
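
    A minimal Swift sketch (assuming the opencv2 module; the photo and rectangle are placeholders) initializing the segmentation from a rectangle:

    import opencv2

    // Placeholder 8-bit 3-channel image; the object is assumed to lie inside `rect`.
    let photo = Mat(rows: 240, cols: 320, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))
    let mask = Mat()
    let bgdModel = Mat()
    let fgdModel = Mat()
    let rect = Rect2i(x: 40, y: 30, width: 240, height: 180)

    // 5 iterations, initializing the mask from the rectangle.
    Imgproc.grabCut(img: photo, mask: mask, rect: rect, bgdModel: bgdModel, fgdModel: fgdModel, iterCount: 5, mode: GrabCutModes.GC_INIT_WITH_RECT.rawValue)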

  • Calculates the integral of an image.

    The function calculates one or more integral images for the source image as follows:

    \texttt{sum} (X,Y) = \sum _{x<X,y<Y} \texttt{image} (x,y)

    \texttt{sqsum} (X,Y) = \sum _{x<X,y<Y} \texttt{image} (x,y)^2

    \texttt{tilted} (X,Y) = \sum _{y<Y,abs(x-X+1) \leq Y-y-1} \texttt{image} (x,y)

    Using these integral images, you can calculate sum, mean, and standard deviation over a specific up-right or rotated rectangular region of the image in a constant time, for example:

    \sum _{x_1 \leq x < x_2, \, y_1 \leq y < y_2} \texttt{image} (x,y) = \texttt{sum} (x_2,y_2)- \texttt{sum} (x_1,y_2)- \texttt{sum} (x_2,y_1)+ \texttt{sum} (x_1,y_1)

    This makes it possible to do fast blurring or fast block correlation with a variable window size, for example. In case of multi-channel images, sums for each channel are accumulated independently.

    As a practical example, the next figure shows the calculation of the integral of a straight rectangle Rect(3,3,3,2) and of a tilted rectangle Rect(5,1,2,3) . The selected pixels in the original image are shown, as well as the relative pixels in the integral images sum and tilted .

    integral calculation example

    Declaration

    Objective-C

    + (void)integral3:(nonnull Mat *)src
                  sum:(nonnull Mat *)sum
                sqsum:(nonnull Mat *)sqsum
               tilted:(nonnull Mat *)tilted
               sdepth:(int)sdepth
              sqdepth:(int)sqdepth;

    Swift

    class func integral(src: Mat, sum: Mat, sqsum: Mat, tilted: Mat, sdepth: Int32, sqdepth: Int32)

    Parameters

    src

    input image as W \times H, 8-bit or floating-point (32f or 64f).

    sum

    integral image as (W+1) \times (H+1), 32-bit integer or floating-point (32f or 64f).

    sqsum

    integral image for squared pixel values; it is a (W+1) \times (H+1), double-precision floating-point (64f) array.

    tilted

    integral for the image rotated by 45 degrees; it is a (W+1) \times (H+1) array with the same data type as sum.

    sdepth

    desired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F.

    sqdepth

    desired depth of the integral image of squared pixel values, CV_32F or CV_64F.

  • Calculates the integral of an image.

    The function calculates one or more integral images for the source image as follows:

    \texttt{sum} (X,Y) = \sum _{x<X,y<Y} \texttt{image} (x,y)

    \texttt{sqsum} (X,Y) = \sum _{x<X,y<Y} \texttt{image} (x,y)^2

    \texttt{tilted} (X,Y) = \sum _{y<Y,abs(x-X+1) \leq Y-y-1} \texttt{image} (x,y)

    Using these integral images, you can calculate sum, mean, and standard deviation over a specific up-right or rotated rectangular region of the image in a constant time, for example:

    \sum _{x_1 \leq x < x_2, \, y_1 \leq y < y_2} \texttt{image} (x,y) = \texttt{sum} (x_2,y_2)- \texttt{sum} (x_1,y_2)- \texttt{sum} (x_2,y_1)+ \texttt{sum} (x_1,y_1)

    This makes it possible to do fast blurring or fast block correlation with a variable window size, for example. In case of multi-channel images, sums for each channel are accumulated independently.

    As a practical example, the next figure shows the calculation of the integral of a straight rectangle Rect(3,3,3,2) and of a tilted rectangle Rect(5,1,2,3) . The selected pixels in the original image are shown, as well as the relative pixels in the integral images sum and tilted .

    integral calculation example

    Declaration

    Objective-C

    + (void)integral3:(nonnull Mat *)src
                  sum:(nonnull Mat *)sum
                sqsum:(nonnull Mat *)sqsum
               tilted:(nonnull Mat *)tilted
               sdepth:(int)sdepth;

    Swift

    class func integral(src: Mat, sum: Mat, sqsum: Mat, tilted: Mat, sdepth: Int32)

    Parameters

    src

    input image as W \times H, 8-bit or floating-point (32f or 64f).

    sum

    integral image as (W+1) \times (H+1), 32-bit integer or floating-point (32f or 64f).

    sqsum

    integral image for squared pixel values; it is a (W+1) \times (H+1), double-precision floating-point (64f) array.

    tilted

    integral for the image rotated by 45 degrees; it is a (W+1) \times (H+1) array with the same data type as sum.

    sdepth

    desired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F.

  • Calculates the integral of an image.

    The function calculates one or more integral images for the source image as follows:

    \texttt{sum} (X,Y) = \sum _{x<X,y<Y} \texttt{image} (x,y)

    \texttt{sqsum} (X,Y) = \sum _{x<X,y<Y} \texttt{image} (x,y)^2

    \texttt{tilted} (X,Y) = \sum _{y<Y,abs(x-X+1) \leq Y-y-1} \texttt{image} (x,y)

    Using these integral images, you can calculate sum, mean, and standard deviation over a specific up-right or rotated rectangular region of the image in a constant time, for example:

    \sum _{x_1 \leq x < x_2, \, y_1 \leq y < y_2} \texttt{image} (x,y) = \texttt{sum} (x_2,y_2)- \texttt{sum} (x_1,y_2)- \texttt{sum} (x_2,y_1)+ \texttt{sum} (x_1,y_1)

    This makes it possible to do fast blurring or fast block correlation with a variable window size, for example. In case of multi-channel images, sums for each channel are accumulated independently.

    As a practical example, the next figure shows the calculation of the integral of a straight rectangle Rect(3,3,3,2) and of a tilted rectangle Rect(5,1,2,3) . The selected pixels in the original image are shown, as well as the relative pixels in the integral images sum and tilted .

    integral calculation example

    Declaration

    Objective-C

    + (void)integral3:(nonnull Mat *)src
                  sum:(nonnull Mat *)sum
                sqsum:(nonnull Mat *)sqsum
               tilted:(nonnull Mat *)tilted;

    Swift

    class func integral(src: Mat, sum: Mat, sqsum: Mat, tilted: Mat)

    Parameters

    src

    input image as W \times H, 8-bit or floating-point (32f or 64f).

    sum

    integral image as (W+1) \times (H+1), 32-bit integer or floating-point (32f or 64f).

    sqsum

    integral image for squared pixel values; it is a (W+1) \times (H+1), double-precision floating-point (64f) array.

    tilted

    integral for the image rotated by 45 degrees; it is a (W+1) \times (H+1) array with the same data type as sum.

  • Declaration

    Objective-C

    + (void)integral2:(Mat*)src sum:(Mat*)sum sqsum:(Mat*)sqsum sdepth:(int)sdepth sqdepth:(int)sqdepth NS_SWIFT_NAME(integral(src:sum:sqsum:sdepth:sqdepth:));

    Swift

    class func integral(src: Mat, sum: Mat, sqsum: Mat, sdepth: Int32, sqdepth: Int32)
  • Declaration

    Objective-C

    + (void)integral2:(Mat*)src sum:(Mat*)sum sqsum:(Mat*)sqsum sdepth:(int)sdepth NS_SWIFT_NAME(integral(src:sum:sqsum:sdepth:));

    Swift

    class func integral(src: Mat, sum: Mat, sqsum: Mat, sdepth: Int32)
  • Declaration

    Objective-C

    + (void)integral2:(Mat*)src sum:(Mat*)sum sqsum:(Mat*)sqsum NS_SWIFT_NAME(integral(src:sum:sqsum:));

    Swift

    class func integral(src: Mat, sum: Mat, sqsum: Mat)
  • Declaration

    Objective-C

    + (void)integral:(Mat*)src sum:(Mat*)sum sdepth:(int)sdepth NS_SWIFT_NAME(integral(src:sum:sdepth:));

    Swift

    class func integral(src: Mat, sum: Mat, sdepth: Int32)
  • Declaration

    Objective-C

    + (void)integral:(Mat*)src sum:(Mat*)sum NS_SWIFT_NAME(integral(src:sum:));

    Swift

    class func integral(src: Mat, sum: Mat)
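
    A minimal Swift sketch (assuming the opencv2 module) that checks the summed-area property on a tiny all-ones image:

    import opencv2

    let src = Mat(rows: 4, cols: 4, type: CvType.CV_8UC1, scalar: Scalar(1.0))
    let sum = Mat()

    Imgproc.integral(src: src, sum: sum)

    // sum is 5 x 5; for an all-ones image, sum(X, Y) = X * Y, so the bottom-right entry is 16.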
  • Inverts an affine transformation.

    The function computes an inverse affine transformation represented by a 2 \times 3 matrix M:

    \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{bmatrix}

    The result is also a 2 \times 3 matrix of the same type as M.

    Declaration

    Objective-C

    + (void)invertAffineTransform:(nonnull Mat *)M iM:(nonnull Mat *)iM;

    Swift

    class func invertAffineTransform(M: Mat, iM: Mat)

    Parameters

    M

    Original affine transformation.

    iM

    Output reverse affine transformation.
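
    A minimal Swift sketch (assuming the opencv2 module) inverting a rotation matrix obtained from #getRotationMatrix2D:

    import opencv2

    // A 30-degree rotation about (50, 50); the result is a 2 x 3 CV_64F matrix.
    let M = Imgproc.getRotationMatrix2D(center: Point2f(x: 50, y: 50), angle: 30, scale: 1)
    let iM = Mat()

    Imgproc.invertAffineTransform(M: M, iM: iM)

    // warpAffine with iM undoes a warpAffine with M (up to resampling error).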

  • Draws a line segment connecting two points.

    The function line draws the line segment between the pt1 and pt2 points in the image. The line is clipped by the image boundaries. For non-antialiased lines with integer coordinates, the 8-connected or 4-connected Bresenham algorithm is used. Thick lines are drawn with rounded endings. Antialiased lines are drawn using Gaussian filtering.

    Declaration

    Objective-C

    + (void)line:(nonnull Mat *)img
              pt1:(nonnull Point2i *)pt1
              pt2:(nonnull Point2i *)pt2
            color:(nonnull Scalar *)color
        thickness:(int)thickness
         lineType:(LineTypes)lineType
            shift:(int)shift;

    Swift

    class func line(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32, lineType: LineTypes, shift: Int32)

    Parameters

    img

    Image.

    pt1

    First point of the line segment.

    pt2

    Second point of the line segment.

    color

    Line color.

    thickness

    Line thickness.

    lineType

    Type of the line. See #LineTypes.

    shift

    Number of fractional bits in the point coordinates.

  • Draws a line segment connecting two points.

    The function line draws the line segment between the pt1 and pt2 points in the image. The line is clipped by the image boundaries. For non-antialiased lines with integer coordinates, the 8-connected or 4-connected Bresenham algorithm is used. Thick lines are drawn with rounded endings. Antialiased lines are drawn using Gaussian filtering.

    Declaration

    Objective-C

    + (void)line:(nonnull Mat *)img
              pt1:(nonnull Point2i *)pt1
              pt2:(nonnull Point2i *)pt2
            color:(nonnull Scalar *)color
        thickness:(int)thickness
         lineType:(LineTypes)lineType;

    Swift

    class func line(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32, lineType: LineTypes)

    Parameters

    img

    Image.

    pt1

    First point of the line segment.

    pt2

    Second point of the line segment.

    color

    Line color.

    thickness

    Line thickness.

    lineType

    Type of the line. See #LineTypes.

  • Draws a line segment connecting two points.

    The function line draws the line segment between the pt1 and pt2 points in the image. The line is clipped by the image boundaries. For non-antialiased lines with integer coordinates, the 8-connected or 4-connected Bresenham algorithm is used. Thick lines are drawn with rounded endings. Antialiased lines are drawn using Gaussian filtering.

    Declaration

    Objective-C

    + (void)line:(nonnull Mat *)img
              pt1:(nonnull Point2i *)pt1
              pt2:(nonnull Point2i *)pt2
            color:(nonnull Scalar *)color
        thickness:(int)thickness;

    Swift

    class func line(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32)

    Parameters

    img

    Image.

    pt1

    First point of the line segment.

    pt2

    Second point of the line segment.

    color

    Line color.

    thickness

    Line thickness.

  • Draws a line segment connecting two points.

    The function line draws the line segment between the pt1 and pt2 points in the image. The line is clipped by the image boundaries. For non-antialiased lines with integer coordinates, the 8-connected or 4-connected Bresenham algorithm is used. Thick lines are drawn with rounded endings. Antialiased lines are drawn using Gaussian filtering.

    Declaration

    Objective-C

    + (void)line:(nonnull Mat *)img
             pt1:(nonnull Point2i *)pt1
             pt2:(nonnull Point2i *)pt2
           color:(nonnull Scalar *)color;

    Swift

    class func line(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar)

    Parameters

    img

    Image.

    pt1

    First point of the line segment.

    pt2

    Second point of the line segment.

    color

    Line color.
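
    A minimal Swift sketch (assuming the opencv2 module) drawing a 2-pixel-thick anti-aliased diagonal:

    import opencv2

    let canvas = Mat(rows: 100, cols: 100, type: CvType.CV_8UC3, scalar: Scalar(0, 0, 0))

    // BGR red; the segment is clipped automatically if the endpoints fall outside the canvas.
    Imgproc.line(img: canvas, pt1: Point2i(x: 10, y: 10), pt2: Point2i(x: 90, y: 90), color: Scalar(0, 0, 255), thickness: 2, lineType: .LINE_AA)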

  • Deprecated

    Remaps an image to polar coordinates space.

    @deprecated This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags)

    Transform the source image using the following transformation (See REF: polar_remaps_reference_image “Polar remaps reference image c)”):

    \begin{array}{l} dst(\rho, \phi) = src(x, y) \\ dst.size() \leftarrow src.size() \end{array}

    where

    \begin{array}{l} I = (dx, dy) = (x - center.x,\, y - center.y) \\ \rho = Kmag \cdot \texttt{magnitude}(I), \\ \phi = Kangle \cdot \texttt{angle}(I) \end{array}

    and

    \begin{array}{l} Kmag = src.cols / maxRadius \\ Kangle = src.rows / 2\pi \end{array}

    @note

    • The function can not operate in-place.
    • To calculate magnitude and angle in degrees #cartToPolar is used internally thus angles are measured from 0 to 360 with accuracy about 0.3 degrees.

    See

    cv::logPolar

    Declaration

    Objective-C

    + (void)linearPolar:(nonnull Mat *)src
                    dst:(nonnull Mat *)dst
                 center:(nonnull Point2f *)center
              maxRadius:(double)maxRadius
                  flags:(int)flags;

    Swift

    class func linearPolar(src: Mat, dst: Mat, center: Point2f, maxRadius: Double, flags: Int32)
  • Deprecated

    Remaps an image to semilog-polar coordinates space.

    @deprecated This function produces same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags+WARP_POLAR_LOG);

    Transform the source image using the following transformation (See REF: polar_remaps_reference_image “Polar remaps reference image d)”):

    \begin{array}{l} dst(\rho, \phi) = src(x, y) \\ dst.size() \leftarrow src.size() \end{array}

    where

    \begin{array}{l} I = (dx, dy) = (x - center.x, y - center.y) \\ \rho = M \cdot \log_e(\texttt{magnitude}(I)), \\ \phi = K_{angle} \cdot \texttt{angle}(I) \end{array}

    and

    \begin{array}{l} M = src.cols / \log_e(maxRadius) \\ K_{angle} = src.rows / 2\pi \end{array}

    The function emulates the human “foveal” vision and can be used for fast scale and rotation-invariant template matching, for object tracking and so forth.

    @note

    • The function cannot operate in-place.
    • To calculate magnitude and angle in degrees, #cartToPolar is used internally, thus angles are measured from 0 to 360 with an accuracy of about 0.3 degrees.

    See

    cv::linearPolar

    Declaration

    Objective-C

    + (void)logPolar:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
              center:(nonnull Point2f *)center
                   M:(double)M
               flags:(int)flags;

    Swift

    class func logPolar(src: Mat, dst: Mat, center: Point2f, M: Double, flags: Int32)
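
    A hedged Swift sketch of the recommended replacement for both deprecated functions above (the warpPolar binding and its Swift labels are assumed to mirror the C++ signature quoted in the deprecation notes):

     let dsize = Size2i(width: src.cols(), height: src.rows())
     // linearPolar(src, dst, center, maxRadius, flags) equivalent:
     Imgproc.warpPolar(src: src, dst: dst, dsize: dsize, center: center,
                       maxRadius: maxRadius, flags: flags)
     // logPolar(src, dst, center, M, flags) equivalent (note: pass maxRadius, not M):
     Imgproc.warpPolar(src: src, dst: dst, dsize: dsize, center: center,
                       maxRadius: maxRadius,
                       flags: flags + WarpPolarMode.WARP_POLAR_LOG.rawValue)
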
  • Compares a template against overlapped image regions.

    The function slides through image, compares the overlapped patches of size w \times h against templ using the specified method, and stores the comparison results in result. #TemplateMatchModes describes the formulae for the available comparison methods (I denotes image, T template, R result, M the optional mask). The summation is done over the template and/or the image patch: x’ = 0…w-1, y’ = 0…h-1

    After the function finishes the comparison, the best matches can be found as global minimums (when #TM_SQDIFF was used) or maximums (when #TM_CCORR or #TM_CCOEFF was used) using the #minMaxLoc function. In case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels and separate mean values are used for each channel. That is, the function can take a color template and a color image. The result will still be a single-channel image, which is easier to analyze.

    Declaration

    Objective-C

    + (void)matchTemplate:(nonnull Mat *)image
                    templ:(nonnull Mat *)templ
                   result:(nonnull Mat *)result
                   method:(TemplateMatchModes)method
                     mask:(nonnull Mat *)mask;

    Swift

    class func matchTemplate(image: Mat, templ: Mat, result: Mat, method: TemplateMatchModes, mask: Mat)

    Parameters

    image

    Image where the search is running. It must be 8-bit or 32-bit floating-point.

    templ

    Searched template. It must not be greater than the source image and must have the same data type.

    result

    Map of comparison results. It must be single-channel 32-bit floating-point. If image is W \times H and templ is w \times h, then result is (W-w+1) \times (H-h+1).

    method

    Parameter specifying the comparison method, see #TemplateMatchModes

    mask

    Optional mask. It must have the same size as templ. It must either have the same number of channels as the template or only one channel, which is then used for all template and image channels. If the data type is #CV_8U, the mask is interpreted as a binary mask, meaning only elements where mask is nonzero are used and are kept unchanged independent of the actual mask value (weight equals 1). For data type #CV_32F, the mask values are used as weights. The exact formulas are documented in #TemplateMatchModes.
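
    A minimal Swift sketch of the usual follow-up described above (Core.minMaxLoc and its MinMaxLocResult are assumed from this library's Core module):

     let result = Mat()
     Imgproc.matchTemplate(image: image, templ: templ, result: result,
                           method: .TM_CCOEFF_NORMED)
     // For TM_SQDIFF / TM_SQDIFF_NORMED take minLoc instead of maxLoc.
     let mm = Core.minMaxLoc(result)
     let topLeft = mm.maxLoc
     Imgproc.rectangle(img: image, pt1: topLeft,
                       pt2: Point2i(x: topLeft.x + templ.cols(),
                                    y: topLeft.y + templ.rows()),
                       color: Scalar(0.0, 255.0, 0.0), thickness: 2)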

  • Compares a template against overlapped image regions.

    The function slides through image, compares the overlapped patches of size w \times h against templ using the specified method, and stores the comparison results in result. #TemplateMatchModes describes the formulae for the available comparison methods (I denotes image, T template, R result, M the optional mask). The summation is done over the template and/or the image patch: x’ = 0…w-1, y’ = 0…h-1

    After the function finishes the comparison, the best matches can be found as global minimums (when #TM_SQDIFF was used) or maximums (when #TM_CCORR or #TM_CCOEFF was used) using the #minMaxLoc function. In case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels and separate mean values are used for each channel. That is, the function can take a color template and a color image. The result will still be a single-channel image, which is easier to analyze.

    Declaration

    Objective-C

    + (void)matchTemplate:(nonnull Mat *)image
                    templ:(nonnull Mat *)templ
                   result:(nonnull Mat *)result
                   method:(TemplateMatchModes)method;

    Swift

    class func matchTemplate(image: Mat, templ: Mat, result: Mat, method: TemplateMatchModes)

    Parameters

    image

    Image where the search is running. It must be 8-bit or 32-bit floating-point.

    templ

    Searched template. It must not be greater than the source image and must have the same data type.

    result

    Map of comparison results. It must be single-channel 32-bit floating-point. If image is W \times H and templ is w \times h, then result is (W-w+1) \times (H-h+1).

    method

    Parameter specifying the comparison method, see #TemplateMatchModes

  • Blurs an image using the median filter.

    The function smoothes an image using the median filter with the \texttt{ksize} \times \texttt{ksize} aperture. Each channel of a multi-channel image is processed independently. In-place operation is supported.

    Note

    The median filter uses #BORDER_REPLICATE internally to cope with border pixels, see #BorderTypes

    Declaration

    Objective-C

    + (void)medianBlur:(nonnull Mat *)src dst:(nonnull Mat *)dst ksize:(int)ksize;

    Swift

    class func medianBlur(src: Mat, dst: Mat, ksize: Int32)

    Parameters

    src

    input 1-, 3-, or 4-channel image; when ksize is 3 or 5, the image depth should be CV_8U, CV_16U, or CV_32F; for larger aperture sizes, it can only be CV_8U.

    dst

    destination array of the same size and type as src.

    ksize

    aperture linear size; it must be odd and greater than 1, for example: 3, 5, 7 …
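
    A minimal Swift sketch (noisy is an assumed 8-bit input Mat):

     let denoised = Mat()
     // 5x5 median: effective against salt-and-pepper noise while preserving edges.
     Imgproc.medianBlur(src: noisy, dst: denoised, ksize: 5)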

  • Finds a circle of the minimum area enclosing a 2D point set.

    The function finds the minimal enclosing circle of a 2D point set using an iterative algorithm.

    Declaration

    Objective-C

    + (void)minEnclosingCircle:(nonnull NSArray<Point2f *> *)points
                        center:(nonnull Point2f *)center
                        radius:(nonnull float *)radius;

    Swift

    class func minEnclosingCircle(points: [Point2f], center: Point2f, radius: UnsafeMutablePointer<Float>)

    Parameters

    points

    Input vector of 2D points, stored in std::vector<> or Mat

    center

    Output center of the circle.

    radius

    Output radius of the circle.
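
    A minimal Swift sketch; note that the radius out-parameter is passed as a pointer from Swift:

     let pts = [Point2f(x: 0, y: 0), Point2f(x: 4, y: 0), Point2f(x: 2, y: 3)]
     let center = Point2f(x: 0, y: 0)   // filled in by the call
     var radius: Float = 0
     Imgproc.minEnclosingCircle(points: pts, center: center, radius: &radius)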

  • Performs advanced morphological transformations.

    The function cv::morphologyEx can perform advanced morphological transformations using an erosion and dilation as basic operations.

    Any of the operations can be done in-place. In case of multi-channel images, each channel is processed independently.

    Note

    The number of iterations is the number of times the erosion or dilation operation will be applied. For instance, an opening operation (#MORPH_OPEN) with two iterations is equivalent to applying successively: erode -> erode -> dilate -> dilate (and not erode -> dilate -> erode -> dilate).

    Declaration

    Objective-C

    + (void)morphologyEx:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                      op:(MorphTypes)op
                  kernel:(nonnull Mat *)kernel
                  anchor:(nonnull Point2i *)anchor
              iterations:(int)iterations
              borderType:(BorderTypes)borderType
             borderValue:(nonnull Scalar *)borderValue;

    Swift

    class func morphologyEx(src: Mat, dst: Mat, op: MorphTypes, kernel: Mat, anchor: Point2i, iterations: Int32, borderType: BorderTypes, borderValue: Scalar)

    Parameters

    src

    Source image. The number of channels can be arbitrary. The depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    Destination image of the same size and type as source image.

    op

    Type of a morphological operation, see #MorphTypes

    kernel

    Structuring element. It can be created using #getStructuringElement.

    anchor

    Anchor position with the kernel. Negative values mean that the anchor is at the kernel center.

    iterations

    Number of times erosion and dilation are applied.

    borderType

    Pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.

    borderValue

    Border value in case of a constant border. The default value has a special meaning.
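
    A minimal Swift sketch of a morphological opening, useful for removing small speckles from a binary mask (mask is an assumed input Mat; the Swift labels of getStructuringElement are assumed to mirror the C++ API):

     let kernel = Imgproc.getStructuringElement(shape: .MORPH_ELLIPSE,
                                                ksize: Size2i(width: 5, height: 5))
     let opened = Mat()
     // Erode then dilate: one iteration of #MORPH_OPEN.
     Imgproc.morphologyEx(src: mask, dst: opened, op: .MORPH_OPEN, kernel: kernel)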

  • Performs advanced morphological transformations.

    The function cv::morphologyEx can perform advanced morphological transformations using an erosion and dilation as basic operations.

    Any of the operations can be done in-place. In case of multi-channel images, each channel is processed independently.

    Note

    The number of iterations is the number of times the erosion or dilation operation will be applied. For instance, an opening operation (#MORPH_OPEN) with two iterations is equivalent to applying successively: erode -> erode -> dilate -> dilate (and not erode -> dilate -> erode -> dilate).

    Declaration

    Objective-C

    + (void)morphologyEx:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                      op:(MorphTypes)op
                  kernel:(nonnull Mat *)kernel
                  anchor:(nonnull Point2i *)anchor
              iterations:(int)iterations
              borderType:(BorderTypes)borderType;

    Swift

    class func morphologyEx(src: Mat, dst: Mat, op: MorphTypes, kernel: Mat, anchor: Point2i, iterations: Int32, borderType: BorderTypes)

    Parameters

    src

    Source image. The number of channels can be arbitrary. The depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    Destination image of the same size and type as source image.

    op

    Type of a morphological operation, see #MorphTypes

    kernel

    Structuring element. It can be created using #getStructuringElement.

    anchor

    Anchor position with the kernel. Negative values mean that the anchor is at the kernel center.

    iterations

    Number of times erosion and dilation are applied.

    borderType

    Pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.

  • Performs advanced morphological transformations.

    The function cv::morphologyEx can perform advanced morphological transformations using an erosion and dilation as basic operations.

    Any of the operations can be done in-place. In case of multi-channel images, each channel is processed independently.

    Note

    The number of iterations is the number of times the erosion or dilation operation will be applied. For instance, an opening operation (#MORPH_OPEN) with two iterations is equivalent to applying successively: erode -> erode -> dilate -> dilate (and not erode -> dilate -> erode -> dilate).

    Declaration

    Objective-C

    + (void)morphologyEx:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                      op:(MorphTypes)op
                  kernel:(nonnull Mat *)kernel
                  anchor:(nonnull Point2i *)anchor
              iterations:(int)iterations;

    Swift

    class func morphologyEx(src: Mat, dst: Mat, op: MorphTypes, kernel: Mat, anchor: Point2i, iterations: Int32)

    Parameters

    src

    Source image. The number of channels can be arbitrary. The depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    Destination image of the same size and type as source image.

    op

    Type of a morphological operation, see #MorphTypes

    kernel

    Structuring element. It can be created using #getStructuringElement.

    anchor

    Anchor position with the kernel. Negative values mean that the anchor is at the kernel center.

    iterations

    Number of times erosion and dilation are applied.

  • Performs advanced morphological transformations.

    The function cv::morphologyEx can perform advanced morphological transformations using an erosion and dilation as basic operations.

    Any of the operations can be done in-place. In case of multi-channel images, each channel is processed independently.

    Note

    The number of iterations is the number of times the erosion or dilation operation will be applied. For instance, an opening operation (#MORPH_OPEN) with two iterations is equivalent to applying successively: erode -> erode -> dilate -> dilate (and not erode -> dilate -> erode -> dilate).

    Declaration

    Objective-C

    + (void)morphologyEx:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                      op:(MorphTypes)op
                  kernel:(nonnull Mat *)kernel
                  anchor:(nonnull Point2i *)anchor;

    Swift

    class func morphologyEx(src: Mat, dst: Mat, op: MorphTypes, kernel: Mat, anchor: Point2i)

    Parameters

    src

    Source image. The number of channels can be arbitrary. The depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    Destination image of the same size and type as source image.

    op

    Type of a morphological operation, see #MorphTypes

    kernel

    Structuring element. It can be created using #getStructuringElement.

    anchor

    Anchor position with the kernel. Negative values mean that the anchor is at the kernel center.

  • Performs advanced morphological transformations.

    The function cv::morphologyEx can perform advanced morphological transformations using an erosion and dilation as basic operations.

    Any of the operations can be done in-place. In case of multi-channel images, each channel is processed independently.

    Note

    The number of iterations is the number of times the erosion or dilation operation will be applied. For instance, an opening operation (#MORPH_OPEN) with two iterations is equivalent to applying successively: erode -> erode -> dilate -> dilate (and not erode -> dilate -> erode -> dilate).

    Declaration

    Objective-C

    + (void)morphologyEx:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                      op:(MorphTypes)op
                  kernel:(nonnull Mat *)kernel;

    Swift

    class func morphologyEx(src: Mat, dst: Mat, op: MorphTypes, kernel: Mat)

    Parameters

    src

    Source image. The number of channels can be arbitrary. The depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

    dst

    Destination image of the same size and type as source image.

    op

    Type of a morphological operation, see #MorphTypes

    kernel

    Structuring element. It can be created using #getStructuringElement.

  • Draws several polygonal curves.

    The function cv::polylines draws one or more polygonal curves.

    Declaration

    Objective-C

    + (void)polylines:(nonnull Mat *)img
                  pts:(nonnull NSArray<NSArray<Point2i *> *> *)pts
             isClosed:(BOOL)isClosed
                color:(nonnull Scalar *)color
            thickness:(int)thickness
             lineType:(LineTypes)lineType
                shift:(int)shift;

    Swift

    class func polylines(img: Mat, pts: [[Point2i]], isClosed: Bool, color: Scalar, thickness: Int32, lineType: LineTypes, shift: Int32)

    Parameters

    img

    Image.

    pts

    Array of polygonal curves.

    isClosed

    Flag indicating whether the drawn polylines are closed or not. If they are closed, the function draws a line from the last vertex of each curve to its first vertex.

    color

    Polyline color.

    thickness

    Thickness of the polyline edges.

    lineType

    Type of the line segments. See #LineTypes

    shift

    Number of fractional bits in the vertex coordinates.
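
    A minimal Swift sketch drawing one closed curve (canvas is an assumed existing Mat):

     let triangle = [Point2i(x: 50, y: 10), Point2i(x: 90, y: 90), Point2i(x: 10, y: 90)]
     // isClosed: true adds the closing edge from the last vertex back to the first.
     Imgproc.polylines(img: canvas, pts: [triangle], isClosed: true,
                       color: Scalar(255.0, 0.0, 0.0), thickness: 2)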

  • Draws several polygonal curves.

    The function cv::polylines draws one or more polygonal curves.

    Declaration

    Objective-C

    + (void)polylines:(nonnull Mat *)img
                  pts:(nonnull NSArray<NSArray<Point2i *> *> *)pts
             isClosed:(BOOL)isClosed
                color:(nonnull Scalar *)color
            thickness:(int)thickness
             lineType:(LineTypes)lineType;

    Swift

    class func polylines(img: Mat, pts: [[Point2i]], isClosed: Bool, color: Scalar, thickness: Int32, lineType: LineTypes)

    Parameters

    img

    Image.

    pts

    Array of polygonal curves.

    isClosed

    Flag indicating whether the drawn polylines are closed or not. If they are closed, the function draws a line from the last vertex of each curve to its first vertex.

    color

    Polyline color.

    thickness

    Thickness of the polyline edges.

    lineType

    Type of the line segments. See #LineTypes

  • Draws several polygonal curves.

    The function cv::polylines draws one or more polygonal curves.

    Declaration

    Objective-C

    + (void)polylines:(nonnull Mat *)img
                  pts:(nonnull NSArray<NSArray<Point2i *> *> *)pts
             isClosed:(BOOL)isClosed
                color:(nonnull Scalar *)color
            thickness:(int)thickness;

    Swift

    class func polylines(img: Mat, pts: [[Point2i]], isClosed: Bool, color: Scalar, thickness: Int32)

    Parameters

    img

    Image.

    pts

    Array of polygonal curves.

    isClosed

    Flag indicating whether the drawn polylines are closed or not. If they are closed, the function draws a line from the last vertex of each curve to its first vertex.

    color

    Polyline color.

    thickness

    Thickness of the polyline edges.

  • Draws several polygonal curves.

    The function cv::polylines draws one or more polygonal curves.

    Declaration

    Objective-C

    + (void)polylines:(nonnull Mat *)img
                  pts:(nonnull NSArray<NSArray<Point2i *> *> *)pts
             isClosed:(BOOL)isClosed
                color:(nonnull Scalar *)color;

    Swift

    class func polylines(img: Mat, pts: [[Point2i]], isClosed: Bool, color: Scalar)

    Parameters

    img

    Image.

    pts

    Array of polygonal curves.

    isClosed

    Flag indicating whether the drawn polylines are closed or not. If they are closed, the function draws a line from the last vertex of each curve to its first vertex.

    color

    Polyline color.

  • Calculates a feature map for corner detection.

    The function calculates the complex spatial derivative-based function of the source image

    \texttt{dst} = (D_x \texttt{src} )^2 \cdot D_{yy} \texttt{src} + (D_y \texttt{src} )^2 \cdot D_{xx} \texttt{src} - 2 D_x \texttt{src} \cdot D_y \texttt{src} \cdot D_{xy} \texttt{src}

    where D_x, D_y are the first image derivatives, D_{xx}, D_{yy} are the second image derivatives, and D_{xy} is the mixed derivative.

    The corners can be found as local maximums of the functions, as shown below:

     Mat corners, dilated_corners;
     preCornerDetect(image, corners, 3);
     // dilation with 3x3 rectangular structuring element
     dilate(corners, dilated_corners, Mat(), 1);
     Mat corner_mask = corners == dilated_corners;
    

    Declaration

    Objective-C

    + (void)preCornerDetect:(nonnull Mat *)src
                        dst:(nonnull Mat *)dst
                      ksize:(int)ksize
                 borderType:(BorderTypes)borderType;

    Swift

    class func preCornerDetect(src: Mat, dst: Mat, ksize: Int32, borderType: BorderTypes)
  • Calculates a feature map for corner detection.

    The function calculates the complex spatial derivative-based function of the source image

    \texttt{dst} = (D_x \texttt{src} )^2 \cdot D_{yy} \texttt{src} + (D_y \texttt{src} )^2 \cdot D_{xx} \texttt{src} - 2 D_x \texttt{src} \cdot D_y \texttt{src} \cdot D_{xy} \texttt{src}

    where D_x, D_y are the first image derivatives, D_{xx}, D_{yy} are the second image derivatives, and D_{xy} is the mixed derivative.

    The corners can be found as local maximums of the functions, as shown below:

     Mat corners, dilated_corners;
     preCornerDetect(image, corners, 3);
     // dilation with 3x3 rectangular structuring element
     dilate(corners, dilated_corners, Mat(), 1);
     Mat corner_mask = corners == dilated_corners;
    

    Declaration

    Objective-C

    + (void)preCornerDetect:(nonnull Mat *)src
                        dst:(nonnull Mat *)dst
                      ksize:(int)ksize;

    Swift

    class func preCornerDetect(src: Mat, dst: Mat, ksize: Int32)
  • Draws a text string.

    The function cv::putText renders the specified text string in the image. Symbols that cannot be rendered using the specified font are replaced by question marks. See #getTextSize for a text rendering code example.

    Declaration

    Objective-C

    + (void)putText:(nonnull Mat *)img
                    text:(nonnull NSString *)text
                     org:(nonnull Point2i *)org
                fontFace:(HersheyFonts)fontFace
               fontScale:(double)fontScale
                   color:(nonnull Scalar *)color
               thickness:(int)thickness
                lineType:(LineTypes)lineType
        bottomLeftOrigin:(BOOL)bottomLeftOrigin;

    Swift

    class func putText(img: Mat, text: String, org: Point2i, fontFace: HersheyFonts, fontScale: Double, color: Scalar, thickness: Int32, lineType: LineTypes, bottomLeftOrigin: Bool)

    Parameters

    img

    Image.

    text

    Text string to be drawn.

    org

    Bottom-left corner of the text string in the image.

    fontFace

    Font type, see #HersheyFonts.

    fontScale

    Font scale factor that is multiplied by the font-specific base size.

    color

    Text color.

    thickness

    Thickness of the lines used to draw the text.

    lineType

    Line type. See #LineTypes

    bottomLeftOrigin

    When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.
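
    A minimal Swift sketch (canvas is an assumed existing Mat; org is the bottom-left corner of the rendered string):

     Imgproc.putText(img: canvas, text: "Hello", org: Point2i(x: 10, y: 40),
                     fontFace: .FONT_HERSHEY_SIMPLEX, fontScale: 1.0,
                     color: Scalar(255.0, 255.0, 255.0), thickness: 2,
                     lineType: .LINE_AA)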

  • Draws a text string.

    The function cv::putText renders the specified text string in the image. Symbols that cannot be rendered using the specified font are replaced by question marks. See #getTextSize for a text rendering code example.

    Declaration

    Objective-C

    + (void)putText:(nonnull Mat *)img
               text:(nonnull NSString *)text
                org:(nonnull Point2i *)org
           fontFace:(HersheyFonts)fontFace
          fontScale:(double)fontScale
              color:(nonnull Scalar *)color
          thickness:(int)thickness
           lineType:(LineTypes)lineType;

    Swift

    class func putText(img: Mat, text: String, org: Point2i, fontFace: HersheyFonts, fontScale: Double, color: Scalar, thickness: Int32, lineType: LineTypes)

    Parameters

    img

    Image.

    text

    Text string to be drawn.

    org

    Bottom-left corner of the text string in the image.

    fontFace

    Font type, see #HersheyFonts.

    fontScale

    Font scale factor that is multiplied by the font-specific base size.

    color

    Text color.

    thickness

    Thickness of the lines used to draw the text.

    lineType

    Line type. See #LineTypes

  • Draws a text string.

    The function cv::putText renders the specified text string in the image. Symbols that cannot be rendered using the specified font are replaced by question marks. See #getTextSize for a text rendering code example.

    Declaration

    Objective-C

    + (void)putText:(nonnull Mat *)img
               text:(nonnull NSString *)text
                org:(nonnull Point2i *)org
           fontFace:(HersheyFonts)fontFace
          fontScale:(double)fontScale
              color:(nonnull Scalar *)color
          thickness:(int)thickness;

    Swift

    class func putText(img: Mat, text: String, org: Point2i, fontFace: HersheyFonts, fontScale: Double, color: Scalar, thickness: Int32)

    Parameters

    img

    Image.

    text

    Text string to be drawn.

    org

    Bottom-left corner of the text string in the image.

    fontFace

    Font type, see #HersheyFonts.

    fontScale

    Font scale factor that is multiplied by the font-specific base size.

    color

    Text color.

    thickness

    Thickness of the lines used to draw the text.

  • Draws a text string.

    The function cv::putText renders the specified text string in the image. Symbols that cannot be rendered using the specified font are replaced by question marks. See #getTextSize for a text rendering code example.

    Declaration

    Objective-C

    + (void)putText:(nonnull Mat *)img
               text:(nonnull NSString *)text
                org:(nonnull Point2i *)org
           fontFace:(HersheyFonts)fontFace
          fontScale:(double)fontScale
              color:(nonnull Scalar *)color;

    Swift

    class func putText(img: Mat, text: String, org: Point2i, fontFace: HersheyFonts, fontScale: Double, color: Scalar)

    Parameters

    img

    Image.

    text

    Text string to be drawn.

    org

    Bottom-left corner of the text string in the image.

    fontFace

    Font type, see #HersheyFonts.

    fontScale

    Font scale factor that is multiplied by the font-specific base size.

    color

    Text color.

  • Blurs an image and downsamples it.

    By default, the size of the output image is computed as Size((src.cols+1)/2, (src.rows+1)/2), but in any case, the following conditions should be satisfied:

    \begin{array}{l} | \texttt{dstsize.width} \cdot 2 - src.cols| \leq 2 \\ | \texttt{dstsize.height} \cdot 2 - src.rows| \leq 2 \end{array}

    The function performs the downsampling step of the Gaussian pyramid construction. First, it convolves the source image with the kernel:

    \frac{1}{256} \begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix}

    Then, it downsamples the image by rejecting even rows and columns.

    Declaration

    Objective-C

    + (void)pyrDown:(nonnull Mat *)src
                dst:(nonnull Mat *)dst
            dstsize:(nonnull Size2i *)dstsize
         borderType:(BorderTypes)borderType;

    Swift

    class func pyrDown(src: Mat, dst: Mat, dstsize: Size2i, borderType: BorderTypes)

    Parameters

    src

    input image.

    dst

    output image; it has the specified size and the same type as src.

    dstsize

    size of the output image.

    borderType

    Pixel extrapolation method, see #BorderTypes (#BORDER_CONSTANT isn’t supported)
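
    A minimal Swift sketch of two pyramid steps; each call roughly halves both dimensions:

     let half = Mat(), quarter = Mat()
     Imgproc.pyrDown(src: src, dst: half)       // ((cols+1)/2, (rows+1)/2)
     Imgproc.pyrDown(src: half, dst: quarter)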

  • Blurs an image and downsamples it.

    By default, the size of the output image is computed as Size((src.cols+1)/2, (src.rows+1)/2), but in any case, the following conditions should be satisfied:

    \begin{array}{l} | \texttt{dstsize.width} \cdot 2 - src.cols| \leq 2 \\ | \texttt{dstsize.height} \cdot 2 - src.rows| \leq 2 \end{array}

    The function performs the downsampling step of the Gaussian pyramid construction. First, it convolves the source image with the kernel:

    \frac{1}{256} \begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix}

    Then, it downsamples the image by rejecting even rows and columns.

    Declaration

    Objective-C

    + (void)pyrDown:(nonnull Mat *)src
                dst:(nonnull Mat *)dst
            dstsize:(nonnull Size2i *)dstsize;

    Swift

    class func pyrDown(src: Mat, dst: Mat, dstsize: Size2i)

    Parameters

    src

    input image.

    dst

    output image; it has the specified size and the same type as src.

    dstsize

    size of the output image.

  • Blurs an image and downsamples it.

    By default, the size of the output image is computed as Size((src.cols+1)/2, (src.rows+1)/2), but in any case, the following conditions should be satisfied:

    \begin{array}{l} | \texttt{dstsize.width} \cdot 2 - src.cols| \leq 2 \\ | \texttt{dstsize.height} \cdot 2 - src.rows| \leq 2 \end{array}

    The function performs the downsampling step of the Gaussian pyramid construction. First, it convolves the source image with the kernel:

    \frac{1}{256} \begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix}

    Then, it downsamples the image by rejecting even rows and columns.

    Declaration

    Objective-C

    + (void)pyrDown:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func pyrDown(src: Mat, dst: Mat)

    Parameters

    src

    input image.

    dst

    output image; it has the specified size and the same type as src.

  • Performs initial step of meanshift segmentation of an image.

    The function implements the filtering stage of meanshift segmentation, that is, the output of the function is the filtered “posterized” image with color gradients and fine-grain texture flattened. At every pixel (X,Y) of the input image (or down-sized input image, see below) the function executes meanshift iterations, that is, the pixel (X,Y) neighborhood in the joint space-color hyperspace is considered:

    (x,y): X- \texttt{sp} \le x \le X+ \texttt{sp} , Y- \texttt{sp} \le y \le Y+ \texttt{sp} , ||(R,G,B)-(r,g,b)|| \le \texttt{sr}

    where (R,G,B) and (r,g,b) are the vectors of color components at (X,Y) and (x,y), respectively (though, the algorithm does not depend on the color space used, so any 3-component color space can be used instead). Over the neighborhood the average spatial value (X’,Y’) and average color vector (R’,G’,B’) are found and they act as the neighborhood center on the next iteration:

    (X,Y)~(X’,Y’), (R,G,B)~(R’,G’,B’).

    After the iterations are over, the color components of the initial pixel (that is, the pixel from where the iterations started) are set to the final value (average color at the last iteration):

    I(X,Y) <- (R*,G*,B*)

    When maxLevel > 0, a Gaussian pyramid of maxLevel+1 levels is built, and the above procedure is run on the smallest layer first. After that, the results are propagated to the larger layer and the iterations are run again only on those pixels where the layer colors differ by more than sr from the lower-resolution layer of the pyramid. That makes boundaries of color regions sharper. Note that the results will be actually different from the ones obtained by running the meanshift procedure on the whole original image (i.e. when maxLevel==0).

    Declaration

    Objective-C

    + (void)pyrMeanShiftFiltering:(nonnull Mat *)src
                              dst:(nonnull Mat *)dst
                               sp:(double)sp
                               sr:(double)sr
                         maxLevel:(int)maxLevel
                         termcrit:(nonnull TermCriteria *)termcrit;

    Swift

    class func pyrMeanShiftFiltering(src: Mat, dst: Mat, sp: Double, sr: Double, maxLevel: Int32, termcrit: TermCriteria)

    Parameters

    src

    The source 8-bit, 3-channel image.

    dst

    The destination image of the same format and the same size as the source.

    sp

    The spatial window radius.

    sr

    The color window radius.

    maxLevel

    Maximum level of the pyramid for the segmentation.

    termcrit

    Termination criteria: when to stop meanshift iterations.
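
    A minimal Swift sketch (bgr is an assumed 8-bit, 3-channel input Mat; the sp/sr values are illustrative):

     let posterized = Mat()
     // Larger sp/sr flatten more texture and merge more colors.
     Imgproc.pyrMeanShiftFiltering(src: bgr, dst: posterized, sp: 21, sr: 51)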

  • Performs initial step of meanshift segmentation of an image.

    The function implements the filtering stage of meanshift segmentation, that is, the output of the function is the filtered “posterized” image with color gradients and fine-grain texture flattened. At every pixel (X,Y) of the input image (or down-sized input image, see below) the function executes meanshift iterations, that is, the pixel (X,Y) neighborhood in the joint space-color hyperspace is considered:

    (x,y): X- \texttt{sp} \le x \le X+ \texttt{sp} , Y- \texttt{sp} \le y \le Y+ \texttt{sp} , ||(R,G,B)-(r,g,b)|| \le \texttt{sr}

    where (R,G,B) and (r,g,b) are the vectors of color components at (X,Y) and (x,y), respectively (though, the algorithm does not depend on the color space used, so any 3-component color space can be used instead). Over the neighborhood the average spatial value (X’,Y’) and average color vector (R’,G’,B’) are found and they act as the neighborhood center on the next iteration:

    (X,Y)~(X’,Y’), (R,G,B)~(R’,G’,B’).

    After the iterations are over, the color components of the initial pixel (that is, the pixel from where the iterations started) are set to the final value (average color at the last iteration):

    I(X,Y) <- (R*,G*,B*)

    When maxLevel > 0, a Gaussian pyramid of maxLevel+1 levels is built, and the above procedure is run on the smallest layer first. After that, the results are propagated to the larger layer and the iterations are run again only on those pixels where the layer colors differ by more than sr from the lower-resolution layer of the pyramid. That makes boundaries of color regions sharper. Note that the results will be actually different from the ones obtained by running the meanshift procedure on the whole original image (i.e. when maxLevel==0).

    Declaration

    Objective-C

    + (void)pyrMeanShiftFiltering:(nonnull Mat *)src
                              dst:(nonnull Mat *)dst
                               sp:(double)sp
                               sr:(double)sr
                         maxLevel:(int)maxLevel;

    Swift

    class func pyrMeanShiftFiltering(src: Mat, dst: Mat, sp: Double, sr: Double, maxLevel: Int32)

    Parameters

    src

    The source 8-bit, 3-channel image.

    dst

    The destination image of the same format and the same size as the source.

    sp

    The spatial window radius.

    sr

    The color window radius.

    maxLevel

    Maximum level of the pyramid for the segmentation.

  • Performs initial step of meanshift segmentation of an image.

    The function implements the filtering stage of meanshift segmentation, that is, the output of the function is the filtered “posterized” image with color gradients and fine-grain texture flattened. At every pixel (X,Y) of the input image (or down-sized input image, see below) the function executes meanshift iterations, that is, the pixel (X,Y) neighborhood in the joint space-color hyperspace is considered:

    (x,y): X- \texttt{sp} \le x \le X+ \texttt{sp} , Y- \texttt{sp} \le y \le Y+ \texttt{sp} , ||(R,G,B)-(r,g,b)|| \le \texttt{sr}

    where (R,G,B) and (r,g,b) are the vectors of color components at (X,Y) and (x,y), respectively (though, the algorithm does not depend on the color space used, so any 3-component color space can be used instead). Over the neighborhood the average spatial value (X’,Y’) and average color vector (R’,G’,B’) are found and they act as the neighborhood center on the next iteration:

    (X,Y)~(X’,Y’), (R,G,B)~(R’,G’,B’).

    After the iterations are over, the color components of the initial pixel (that is, the pixel from where the iterations started) are set to the final value (average color at the last iteration):

    I(X,Y) <- (R*,G*,B*)

    When maxLevel > 0, a Gaussian pyramid of maxLevel+1 levels is built, and the above procedure is run on the smallest layer first. After that, the results are propagated to the larger layer and the iterations are run again only on those pixels where the layer colors differ by more than sr from the lower-resolution layer of the pyramid. That makes boundaries of color regions sharper. Note that the results will be actually different from the ones obtained by running the meanshift procedure on the whole original image (i.e. when maxLevel==0).

    Declaration

    Objective-C

    + (void)pyrMeanShiftFiltering:(nonnull Mat *)src
                              dst:(nonnull Mat *)dst
                               sp:(double)sp
                               sr:(double)sr;

    Swift

    class func pyrMeanShiftFiltering(src: Mat, dst: Mat, sp: Double, sr: Double)

    Parameters

    src

    The source 8-bit, 3-channel image.

    dst

    The destination image of the same format and the same size as the source.

    sp

    The spatial window radius.

    sr

    The color window radius.

  • Upsamples an image and then blurs it.

    By default, the size of the output image is computed as Size(src.cols*2, src.rows*2), but in any case, the following conditions should be satisfied:

    \begin{array}{l} | \texttt{dstsize.width} - src.cols \cdot 2| \leq (\texttt{dstsize.width} \mod 2) \\ | \texttt{dstsize.height} - src.rows \cdot 2| \leq (\texttt{dstsize.height} \mod 2) \end{array}

    The function performs the upsampling step of the Gaussian pyramid construction, though it can actually be used to construct the Laplacian pyramid. First, it upsamples the source image by injecting even zero rows and columns and then convolves the result with the same kernel as in pyrDown multiplied by 4.

    Declaration

    Objective-C

    + (void)pyrUp:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
           dstsize:(nonnull Size2i *)dstsize
        borderType:(BorderTypes)borderType;

    Swift

    class func pyrUp(src: Mat, dst: Mat, dstsize: Size2i, borderType: BorderTypes)

    Parameters

    src

    input image.

    dst

    output image. It has the specified size and the same type as src .

    dstsize

    size of the output image.

    borderType

    Pixel extrapolation method, see #BorderTypes (only #BORDER_DEFAULT is supported)
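
    A hedged Swift sketch of the Laplacian-pyramid use mentioned above (Core.subtract is assumed from this library's Core module; dstsize pins the upsampled image back to the source size):

     let down = Mat(), up = Mat(), lap = Mat()
     Imgproc.pyrDown(src: src, dst: down)
     Imgproc.pyrUp(src: down, dst: up,
                   dstsize: Size2i(width: src.cols(), height: src.rows()))
     // One Laplacian level: the detail lost by the down/up round trip.
     Core.subtract(src1: src, src2: up, dst: lap)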

  • Upsamples an image and then blurs it.

    By default, the size of the output image is computed as Size(src.cols*2, src.rows*2), but in any case, the following conditions should be satisfied:

    \begin{array}{l} | \texttt{dstsize.width} - src.cols \cdot 2| \leq (\texttt{dstsize.width} \mod 2) \\ | \texttt{dstsize.height} - src.rows \cdot 2| \leq (\texttt{dstsize.height} \mod 2) \end{array}

    The function performs the upsampling step of the Gaussian pyramid construction, though it can actually be used to construct the Laplacian pyramid. First, it upsamples the source image by injecting even zero rows and columns and then convolves the result with the same kernel as in pyrDown multiplied by 4.

    Declaration

    Objective-C

    + (void)pyrUp:(nonnull Mat *)src
              dst:(nonnull Mat *)dst
          dstsize:(nonnull Size2i *)dstsize;

    Swift

    class func pyrUp(src: Mat, dst: Mat, dstsize: Size2i)

    Parameters

    src

    input image.

    dst

    output image. It has the specified size and the same type as src .

    dstsize

    size of the output image.

  • Upsamples an image and then blurs it.

    By default, the size of the output image is computed as Size(src.cols*2, src.rows*2), but in any case, the following conditions should be satisfied:

    \begin{array}{l} | \texttt{dstsize.width} - src.cols \cdot 2| \leq (\texttt{dstsize.width} \mod 2) \\ | \texttt{dstsize.height} - src.rows \cdot 2| \leq (\texttt{dstsize.height} \mod 2) \end{array}

    The function performs the upsampling step of the Gaussian pyramid construction, though it can actually be used to construct the Laplacian pyramid. First, it upsamples the source image by injecting even zero rows and columns and then convolves the result with the same kernel as in pyrDown multiplied by 4.

    Declaration

    Objective-C

    + (void)pyrUp:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func pyrUp(src: Mat, dst: Mat)

    Parameters

    src

    input image.

    dst

    output image. It has the specified size and the same type as src .

  • Draws a simple, thick, or filled up-right rectangle.

    The function cv::rectangle draws a rectangle outline or a filled rectangle whose two opposite corners are pt1 and pt2.

    Declaration

    Objective-C

    + (void)rectangle:(nonnull Mat *)img
                  pt1:(nonnull Point2i *)pt1
                  pt2:(nonnull Point2i *)pt2
                color:(nonnull Scalar *)color
            thickness:(int)thickness
             lineType:(LineTypes)lineType
                shift:(int)shift;

    Swift

    class func rectangle(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32, lineType: LineTypes, shift: Int32)

    Parameters

    img

    Image.

    pt1

    Vertex of the rectangle.

    pt2

    Vertex of the rectangle opposite to pt1 .

    color

    Rectangle color or brightness (grayscale image).

    thickness

    Thickness of lines that make up the rectangle. Negative values, like #FILLED, mean that the function has to draw a filled rectangle.

    lineType

    Type of the line. See #LineTypes

    shift

    Number of fractional bits in the point coordinates.
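
    A minimal Swift sketch of an outlined and a filled rectangle (canvas is an assumed existing Mat; a negative thickness such as -1, i.e. #FILLED, fills the rectangle):

     Imgproc.rectangle(img: canvas, pt1: Point2i(x: 10, y: 10), pt2: Point2i(x: 80, y: 60),
                       color: Scalar(0.0, 255.0, 0.0), thickness: 2)
     Imgproc.rectangle(img: canvas, pt1: Point2i(x: 90, y: 10), pt2: Point2i(x: 160, y: 60),
                       color: Scalar(0.0, 255.0, 0.0), thickness: -1)   // filled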

  • Draws a simple, thick, or filled up-right rectangle.

    The function cv::rectangle draws a rectangle outline or a filled rectangle whose two opposite corners are pt1 and pt2.

    Declaration

    Objective-C

    + (void)rectangle:(nonnull Mat *)img
                  pt1:(nonnull Point2i *)pt1
                  pt2:(nonnull Point2i *)pt2
                color:(nonnull Scalar *)color
            thickness:(int)thickness
             lineType:(LineTypes)lineType;

    Swift

    class func rectangle(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32, lineType: LineTypes)

    Parameters

    img

    Image.

    pt1

    Vertex of the rectangle.

    pt2

    Vertex of the rectangle opposite to pt1 .

    color

    Rectangle color or brightness (grayscale image).

    thickness

    Thickness of lines that make up the rectangle. Negative values, like #FILLED, mean that the function has to draw a filled rectangle.

    lineType

    Type of the line. See #LineTypes

  • Draws a simple, thick, or filled up-right rectangle.

    The function cv::rectangle draws a rectangle outline or a filled rectangle whose two opposite corners are pt1 and pt2.

    Declaration

    Objective-C

    + (void)rectangle:(nonnull Mat *)img
                  pt1:(nonnull Point2i *)pt1
                  pt2:(nonnull Point2i *)pt2
                color:(nonnull Scalar *)color
            thickness:(int)thickness;

    Swift

    class func rectangle(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar, thickness: Int32)

    Parameters

    img

    Image.

    pt1

    Vertex of the rectangle.

    pt2

    Vertex of the rectangle opposite to pt1 .

    color

    Rectangle color or brightness (grayscale image).

    thickness

    Thickness of lines that make up the rectangle. Negative values, like #FILLED, mean that the function has to draw a filled rectangle.

  • Draws a simple, thick, or filled up-right rectangle.

    The function cv::rectangle draws a rectangle outline or a filled rectangle whose two opposite corners are pt1 and pt2.

    Declaration

    Objective-C

    + (void)rectangle:(nonnull Mat *)img
                  pt1:(nonnull Point2i *)pt1
                  pt2:(nonnull Point2i *)pt2
                color:(nonnull Scalar *)color;

    Swift

    class func rectangle(img: Mat, pt1: Point2i, pt2: Point2i, color: Scalar)

    Parameters

    img

    Image.

    pt1

    Vertex of the rectangle.

    pt2

    Vertex of the rectangle opposite to pt1 .

    color

    Rectangle color or brightness (grayscale image).

  • Uses the rec parameter as an alternative specification of the drawn rectangle: rec.tl() and rec.br()-Point(1,1) are opposite corners.

    Declaration

    Objective-C

    + (void)rectangle:(nonnull Mat *)img
                  rec:(nonnull Rect2i *)rec
                color:(nonnull Scalar *)color
            thickness:(int)thickness
             lineType:(LineTypes)lineType
                shift:(int)shift;

    Swift

    class func rectangle(img: Mat, rec: Rect2i, color: Scalar, thickness: Int32, lineType: LineTypes, shift: Int32)
  • Uses the rec parameter as an alternative specification of the drawn rectangle: rec.tl() and rec.br()-Point(1,1) are opposite corners.

    Declaration

    Objective-C

    + (void)rectangle:(nonnull Mat *)img
                  rec:(nonnull Rect2i *)rec
                color:(nonnull Scalar *)color
            thickness:(int)thickness
             lineType:(LineTypes)lineType;

    Swift

    class func rectangle(img: Mat, rec: Rect2i, color: Scalar, thickness: Int32, lineType: LineTypes)
  • Uses the rec parameter as an alternative specification of the drawn rectangle: rec.tl() and rec.br()-Point(1,1) are opposite corners.

    Declaration

    Objective-C

    + (void)rectangle:(nonnull Mat *)img
                  rec:(nonnull Rect2i *)rec
                color:(nonnull Scalar *)color
            thickness:(int)thickness;

    Swift

    class func rectangle(img: Mat, rec: Rect2i, color: Scalar, thickness: Int32)
  • Uses the rec parameter as an alternative specification of the drawn rectangle: rec.tl() and rec.br()-Point(1,1) are opposite corners.

    Declaration

    Objective-C

    + (void)rectangle:(nonnull Mat *)img
                  rec:(nonnull Rect2i *)rec
                color:(nonnull Scalar *)color;

    Swift

    class func rectangle(img: Mat, rec: Rect2i, color: Scalar)
  • Applies a generic geometrical transformation to an image.

    The function remap transforms the source image using the specified map:

    \texttt{dst} (x,y) = \texttt{src} (map_x(x,y),map_y(x,y))

    where values of pixels with non-integer coordinates are computed using one of the available interpolation methods. map_x and map_y can be encoded as separate floating-point maps in map_1 and map_2 respectively, or as interleaved floating-point maps of (x,y) in map_1, or as fixed-point maps created by using convertMaps. The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (~2x) remapping operations. In the converted case, map_1 contains pairs (cvFloor(x), cvFloor(y)) and map_2 contains indices in a table of interpolation coefficients.

    This function cannot operate in-place.

    Declaration

    Objective-C

    + (void)remap:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
                 map1:(nonnull Mat *)map1
                 map2:(nonnull Mat *)map2
        interpolation:(int)interpolation
           borderMode:(BorderTypes)borderMode
          borderValue:(nonnull Scalar *)borderValue;

    Swift

    class func remap(src: Mat, dst: Mat, map1: Mat, map2: Mat, interpolation: Int32, borderMode: BorderTypes, borderValue: Scalar)

    Parameters

    src

    Source image.

    dst

    Destination image. It has the same size as map1 and the same type as src .

    map1

    The first map of either (x,y) points or just x values having the type CV_16SC2 , CV_32FC1, or CV_32FC2. See convertMaps for details on converting a floating point representation to fixed-point for speed.

    map2

    The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.

    interpolation

    Interpolation method (see #InterpolationFlags). The method #INTER_AREA is not supported by this function.

    borderMode

    Pixel extrapolation method (see #BorderTypes). When borderMode=#BORDER_TRANSPARENT, it means that the pixels in the destination image that correspond to the “outliers” in the source image are not modified by the function.

    borderValue

    Value used in case of a constant border. By default, it is 0. @note Due to current implementation limitations, the input and output image sizes should be less than 32767x32767.
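
    A hedged Swift sketch of a horizontal flip expressed as a remap (the throwing Mat.put(row:col:data:) accessor is assumed from this library's Mat bindings):

     let w = src.cols(), h = src.rows()
     let mapX = Mat(rows: h, cols: w, type: CvType.CV_32FC1)
     let mapY = Mat(rows: h, cols: w, type: CvType.CV_32FC1)
     for y in 0..<h {
         // Row y maps (x, y) -> (w - 1 - x, y).
         let xs = (0..<w).map { Float(w - 1 - $0) }
         let ys = [Float](repeating: Float(y), count: Int(w))
         try? mapX.put(row: y, col: 0, data: xs)
         try? mapY.put(row: y, col: 0, data: ys)
     }
     let flipped = Mat()
     Imgproc.remap(src: src, dst: flipped, map1: mapX, map2: mapY,
                   interpolation: InterpolationFlags.INTER_LINEAR.rawValue)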

  • Applies a generic geometrical transformation to an image.

    The function remap transforms the source image using the specified map:

    \texttt{dst} (x,y) = \texttt{src} (map_x(x,y),map_y(x,y))

    where values of pixels with non-integer coordinates are computed using one of the available interpolation methods. map_x and map_y can be encoded as separate floating-point maps in map_1 and map_2 respectively, or as interleaved floating-point maps of (x,y) in map_1, or as fixed-point maps created by using convertMaps. The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (~2x) remapping operations. In the converted case, map_1 contains pairs (cvFloor(x), cvFloor(y)) and map_2 contains indices in a table of interpolation coefficients.

    This function cannot operate in-place.

    Declaration

    Objective-C

    + (void)remap:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
                 map1:(nonnull Mat *)map1
                 map2:(nonnull Mat *)map2
        interpolation:(int)interpolation
           borderMode:(BorderTypes)borderMode;

    Swift

    class func remap(src: Mat, dst: Mat, map1: Mat, map2: Mat, interpolation: Int32, borderMode: BorderTypes)

    Parameters

    src

    Source image.

    dst

    Destination image. It has the same size as map1 and the same type as src .

    map1

    The first map of either (x,y) points or just x values having the type CV_16SC2 , CV_32FC1, or CV_32FC2. See convertMaps for details on converting a floating point representation to fixed-point for speed.

    map2

    The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.

    interpolation

    Interpolation method (see #InterpolationFlags). The method #INTER_AREA is not supported by this function.

    borderMode

    Pixel extrapolation method (see #BorderTypes). When borderMode=#BORDER_TRANSPARENT, it means that the pixels in the destination image that correspond to the “outliers” in the source image are not modified by the function. @note Due to current implementation limitations, the input and output image sizes should be less than 32767x32767.

  • Applies a generic geometrical transformation to an image.

    The function remap transforms the source image using the specified map:

    \texttt{dst} (x,y) = \texttt{src} (map_x(x,y),map_y(x,y))

    where values of pixels with non-integer coordinates are computed using one of the available interpolation methods. map_x and map_y can be encoded as separate floating-point maps in map_1 and map_2 respectively, or as interleaved floating-point maps of (x,y) in map_1, or as fixed-point maps created by using convertMaps. The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (~2x) remapping operations. In the converted case, map_1 contains pairs (cvFloor(x), cvFloor(y)) and map_2 contains indices in a table of interpolation coefficients.

    This function cannot operate in-place.

    Declaration

    Objective-C

    + (void)remap:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
                 map1:(nonnull Mat *)map1
                 map2:(nonnull Mat *)map2
        interpolation:(int)interpolation;

    Swift

    class func remap(src: Mat, dst: Mat, map1: Mat, map2: Mat, interpolation: Int32)

    Parameters

    src

    Source image.

    dst

    Destination image. It has the same size as map1 and the same type as src .

    map1

    The first map of either (x,y) points or just x values having the type CV_16SC2 , CV_32FC1, or CV_32FC2. See convertMaps for details on converting a floating point representation to fixed-point for speed.

    map2

    The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.

    interpolation

    Interpolation method (see #InterpolationFlags). The method #INTER_AREA is not supported by this function. @note Due to current implementation limitations, the input and output image sizes should be less than 32767x32767.

  • Resizes an image.

    The function resize resizes the image src down to or up to the specified size. Note that the initial dst type or size are not taken into account. Instead, the size and type are derived from src, dsize, fx, and fy. If you want to resize src so that it fits the pre-created dst, you may call the function as follows:

     // explicitly specify dsize=dst.size(); fx and fy will be computed from that.
     resize(src, dst, dst.size(), 0, 0, interpolation);

    If you want to decimate the image by a factor of 2 in each direction, you can call the function this way:

     // specify fx and fy and let the function compute the destination image size.
     resize(src, dst, Size(), 0.5, 0.5, interpolation);

    To shrink an image, it will generally look best with #INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with #INTER_CUBIC (slow) or #INTER_LINEAR (faster but still looks OK).

    Declaration

    Objective-C

    + (void)resize:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
                dsize:(nonnull Size2i *)dsize
                   fx:(double)fx
                   fy:(double)fy
        interpolation:(int)interpolation;

    Swift

    class func resize(src: Mat, dst: Mat, dsize: Size2i, fx: Double, fy: Double, interpolation: Int32)

    Parameters

    src

    input image.

    dst

    output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src.

    dsize

    output image size; if it equals zero, it is computed as:

    \texttt{dsize = Size(round(fx*src.cols), round(fy*src.rows))}
    Either dsize or both fx and fy must be non-zero.

    fx

    scale factor along the horizontal axis; when it equals 0, it is computed as

    \texttt{(double)dsize.width/src.cols}

    fy

    scale factor along the vertical axis; when it equals 0, it is computed as

    \texttt{(double)dsize.height/src.rows}

    interpolation

    interpolation method, see #InterpolationFlags
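
    A minimal Swift sketch of the half-size decimation shown above, using #INTER_AREA as recommended for shrinking (the raw values of InterpolationFlags are assumed to match the int interpolation parameter):

     let half = Mat()
     // dsize = 0x0 lets fx/fy determine the output size.
     Imgproc.resize(src: src, dst: half, dsize: Size2i(width: 0, height: 0),
                    fx: 0.5, fy: 0.5,
                    interpolation: InterpolationFlags.INTER_AREA.rawValue)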

  • Resizes an image.

    The function resize resizes the image src down to or up to the specified size. Note that the initial dst type or size are not taken into account. Instead, the size and type are derived from src, dsize, fx, and fy. If you want to resize src so that it fits the pre-created dst, you may call the function as follows:

     // explicitly specify dsize=dst.size(); fx and fy will be computed from that.
     resize(src, dst, dst.size(), 0, 0, interpolation);

    If you want to decimate the image by a factor of 2 in each direction, you can call the function this way:

     // specify fx and fy and let the function compute the destination image size.
     resize(src, dst, Size(), 0.5, 0.5, interpolation);

    To shrink an image, it will generally look best with #INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with #INTER_CUBIC (slow) or #INTER_LINEAR (faster but still looks OK).

    Declaration

    Objective-C

    + (void)resize:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
             dsize:(nonnull Size2i *)dsize
                fx:(double)fx
                fy:(double)fy;

    Swift

    class func resize(src: Mat, dst: Mat, dsize: Size2i, fx: Double, fy: Double)

    Parameters

    src

    input image.

    dst

    output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src.

    dsize

    output image size; if it equals zero, it is computed as:

    \texttt{dsize = Size(round(fx*src.cols), round(fy*src.rows))}
    Either dsize or both fx and fy must be non-zero.

    fx

    scale factor along the horizontal axis; when it equals 0, it is computed as

    \texttt{(double)dsize.width/src.cols}

    fy

    scale factor along the vertical axis; when it equals 0, it is computed as

    \texttt{(double)dsize.height/src.rows}

  • Resizes an image.

    The function resize resizes the image src down to or up to the specified size. Note that the initial dst type or size are not taken into account. Instead, the size and type are derived from the src, dsize, fx, and fy. If you want to resize src so that it fits the pre-created dst, you may call the function as follows:

     // explicitly specify dsize=dst.size(); fx and fy will be computed from that.
     resize(src, dst, dst.size(), 0, 0, interpolation);
    

    If you want to decimate the image by a factor of 2 in each direction, you can call the function this way:

     // specify fx and fy and let the function compute the destination image size.
     resize(src, dst, Size(), 0.5, 0.5, interpolation);
    

    To shrink an image, it will generally look best with #INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with #INTER_CUBIC (slow) or #INTER_LINEAR (faster but still looks OK).

    Declaration

    Objective-C

    + (void)resize:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
             dsize:(nonnull Size2i *)dsize
                fx:(double)fx;

    Swift

    class func resize(src: Mat, dst: Mat, dsize: Size2i, fx: Double)

    Parameters

    src

    input image.

    dst

    output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src.

    dsize

    output image size; if it equals zero, it is computed as:

    \texttt{dsize = Size(round(fx*src.cols), round(fy*src.rows))}
    Either dsize or both fx and fy must be non-zero.

    fx

    scale factor along the horizontal axis; when it equals 0, it is computed as

    \texttt{(double)dsize.width/src.cols}

  • Resizes an image.

    The function resize resizes the image src down to or up to the specified size. Note that the initial dst type or size are not taken into account. Instead, the size and type are derived from the src, dsize, fx, and fy. If you want to resize src so that it fits the pre-created dst, you may call the function as follows:

     // explicitly specify dsize=dst.size(); fx and fy will be computed from that.
     resize(src, dst, dst.size(), 0, 0, interpolation);
    

    If you want to decimate the image by a factor of 2 in each direction, you can call the function this way:

     // specify fx and fy and let the function compute the destination image size.
     resize(src, dst, Size(), 0.5, 0.5, interpolation);
    

    To shrink an image, it will generally look best with #INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with #INTER_CUBIC (slow) or #INTER_LINEAR (faster but still looks OK).

    Declaration

    Objective-C

    + (void)resize:(nonnull Mat *)src
               dst:(nonnull Mat *)dst
             dsize:(nonnull Size2i *)dsize;

    Swift

    class func resize(src: Mat, dst: Mat, dsize: Size2i)

    Parameters

    src

    input image.

    dst

    output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src.

    dsize

    output image size; if it equals zero, it is computed as:

    \texttt{dsize = Size(round(fx*src.cols), round(fy*src.rows))}
    Either dsize or both fx and fy must be non-zero.

  • Applies a separable linear filter to an image.

    The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .

    Declaration

    Objective-C

    + (void)sepFilter2D:(nonnull Mat *)src
                    dst:(nonnull Mat *)dst
                 ddepth:(int)ddepth
                kernelX:(nonnull Mat *)kernelX
                kernelY:(nonnull Mat *)kernelY
                 anchor:(nonnull Point2i *)anchor
                  delta:(double)delta
             borderType:(BorderTypes)borderType;

    Swift

    class func sepFilter2D(src: Mat, dst: Mat, ddepth: Int32, kernelX: Mat, kernelY: Mat, anchor: Point2i, delta: Double, borderType: BorderTypes)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Destination image depth, see REF: filter_depths “combinations”

    kernelX

    Coefficients for filtering each row.

    kernelY

    Coefficients for filtering each column.

    anchor

    Anchor position within the kernel. The default value

    (-1,-1)
    means that the anchor is at the kernel center.

    delta

    Value added to the filtered results before storing them.

    borderType

    Pixel extrapolation method, see #BorderTypes. #BORDER_WRAP is not supported.
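
    As a short Swift sketch (assuming the opencv2 module and that the two-argument getGaussianKernel overload is exposed; the source Mat is presumed populated):

     // Separable Gaussian smoothing: the same 1D kernel filters rows, then columns.
     let src = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3) // assume populated
     let dst = Mat()
     let k = Imgproc.getGaussianKernel(ksize: 5, sigma: 1.2)
     // ddepth -1 keeps the source depth; anchor (-1,-1) centers the kernel.
     Imgproc.sepFilter2D(src: src, dst: dst, ddepth: -1,
                         kernelX: k, kernelY: k,
                         anchor: Point2i(x: -1, y: -1), delta: 0,
                         borderType: .BORDER_DEFAULT)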

  • Applies a separable linear filter to an image.

    The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .

    Declaration

    Objective-C

    + (void)sepFilter2D:(nonnull Mat *)src
                    dst:(nonnull Mat *)dst
                 ddepth:(int)ddepth
                kernelX:(nonnull Mat *)kernelX
                kernelY:(nonnull Mat *)kernelY
                 anchor:(nonnull Point2i *)anchor
                  delta:(double)delta;

    Swift

    class func sepFilter2D(src: Mat, dst: Mat, ddepth: Int32, kernelX: Mat, kernelY: Mat, anchor: Point2i, delta: Double)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Destination image depth, see REF: filter_depths “combinations”

    kernelX

    Coefficients for filtering each row.

    kernelY

    Coefficients for filtering each column.

    anchor

    Anchor position within the kernel. The default value

    (-1,-1)
    means that the anchor is at the kernel center.

    delta

    Value added to the filtered results before storing them.

  • Applies a separable linear filter to an image.

    The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .

    Declaration

    Objective-C

    + (void)sepFilter2D:(nonnull Mat *)src
                    dst:(nonnull Mat *)dst
                 ddepth:(int)ddepth
                kernelX:(nonnull Mat *)kernelX
                kernelY:(nonnull Mat *)kernelY
                 anchor:(nonnull Point2i *)anchor;

    Swift

    class func sepFilter2D(src: Mat, dst: Mat, ddepth: Int32, kernelX: Mat, kernelY: Mat, anchor: Point2i)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Destination image depth, see REF: filter_depths “combinations”

    kernelX

    Coefficients for filtering each row.

    kernelY

    Coefficients for filtering each column.

    anchor

    Anchor position within the kernel. The default value

    (-1,-1)
    means that the anchor is at the kernel center.

  • Applies a separable linear filter to an image.

    The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .

    Declaration

    Objective-C

    + (void)sepFilter2D:(nonnull Mat *)src
                    dst:(nonnull Mat *)dst
                 ddepth:(int)ddepth
                kernelX:(nonnull Mat *)kernelX
                kernelY:(nonnull Mat *)kernelY;

    Swift

    class func sepFilter2D(src: Mat, dst: Mat, ddepth: Int32, kernelX: Mat, kernelY: Mat)

    Parameters

    src

    Source image.

    dst

    Destination image of the same size and the same number of channels as src .

    ddepth

    Destination image depth, see REF: filter_depths “combinations”

    kernelX

    Coefficients for filtering each row.

    kernelY

    Coefficients for filtering each column.

  • Calculates the first order image derivative in both x and y using a Sobel operator

    Equivalent to calling:

     Sobel( src, dx, CV_16SC1, 1, 0, 3 );
     Sobel( src, dy, CV_16SC1, 0, 1, 3 );

    Declaration

    Objective-C

    + (void)spatialGradient:(nonnull Mat *)src
                         dx:(nonnull Mat *)dx
                         dy:(nonnull Mat *)dy
                      ksize:(int)ksize
                 borderType:(BorderTypes)borderType;

    Swift

    class func spatialGradient(src: Mat, dx: Mat, dy: Mat, ksize: Int32, borderType: BorderTypes)

    Parameters

    src

    input image.

    dx

    output image with first-order derivative in x.

    dy

    output image with first-order derivative in y.

    ksize

    size of Sobel kernel. It must be 3.

    borderType

    pixel extrapolation method, see #BorderTypes. Only #BORDER_DEFAULT=#BORDER_REFLECT_101 and #BORDER_REPLICATE are supported.
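
    A Swift sketch (assuming the opencv2 module; spatialGradient expects an 8-bit single-channel source, so a grayscale Mat is presumed here):

     let gray = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1) // assume populated
     let dx = Mat()
     let dy = Mat()
     // Equivalent to two Sobel calls with ksize 3; dx and dy come out as CV_16SC1.
     Imgproc.spatialGradient(src: gray, dx: dx, dy: dy)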

  • Calculates the first order image derivative in both x and y using a Sobel operator

    Equivalent to calling:

     Sobel( src, dx, CV_16SC1, 1, 0, 3 );
     Sobel( src, dy, CV_16SC1, 0, 1, 3 );

    Declaration

    Objective-C

    + (void)spatialGradient:(nonnull Mat *)src
                         dx:(nonnull Mat *)dx
                         dy:(nonnull Mat *)dy
                      ksize:(int)ksize;

    Swift

    class func spatialGradient(src: Mat, dx: Mat, dy: Mat, ksize: Int32)

    Parameters

    src

    input image.

    dx

    output image with first-order derivative in x.

    dy

    output image with first-order derivative in y.

    ksize

    size of Sobel kernel. It must be 3.

  • Calculates the first order image derivative in both x and y using a Sobel operator

    Equivalent to calling:

     Sobel( src, dx, CV_16SC1, 1, 0, 3 );
     Sobel( src, dy, CV_16SC1, 0, 1, 3 );

    Declaration

    Objective-C

    + (void)spatialGradient:(nonnull Mat *)src
                         dx:(nonnull Mat *)dx
                         dy:(nonnull Mat *)dy;

    Swift

    class func spatialGradient(src: Mat, dx: Mat, dy: Mat)

    Parameters

    src

    input image.

    dx

    output image with first-order derivative in x.

    dy

    output image with first-order derivative in y.

  • Calculates the normalized sum of squares of the pixel values overlapping the filter.

    For every pixel

    (x, y)
    in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel
    (x, y)
    .

    The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.

    Declaration

    Objective-C

    + (void)sqrBoxFilter:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                  ddepth:(int)ddepth
                   ksize:(nonnull Size2i *)ksize
                  anchor:(nonnull Point2i *)anchor
               normalize:(BOOL)normalize
              borderType:(BorderTypes)borderType;

    Swift

    class func sqrBoxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i, anchor: Point2i, normalize: Bool, borderType: BorderTypes)

    Parameters

    src

    input image

    dst

    output image of the same size and type as src

    ddepth

    the output image depth (-1 to use src.depth())

    ksize

    kernel size

    anchor

    kernel anchor point. The default value of Point(-1, -1) denotes that the anchor is at the kernel center.

    normalize

    flag specifying whether the kernel is to be normalized by its area or not.

    borderType

    border mode used to extrapolate pixels outside of the image, see #BorderTypes. #BORDER_WRAP is not supported.
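
    A Swift sketch of the local-statistics use case mentioned above (assuming the opencv2 module; boxFilter, Core.multiply, and Core.subtract are used here as the standard element-wise helpers):

     let gray = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1) // assume populated
     let mean = Mat()
     let sqMean = Mat()
     let k = Size2i(width: 7, height: 7)
     // E[X] and E[X^2] over each 7x7 neighborhood, computed in floating point.
     Imgproc.boxFilter(src: gray, dst: mean, ddepth: CvType.CV_32F, ksize: k)
     Imgproc.sqrBoxFilter(src: gray, dst: sqMean, ddepth: CvType.CV_32F, ksize: k)
     // Local variance: E[X^2] - E[X]^2.
     let meanSq = Mat()
     Core.multiply(src1: mean, src2: mean, dst: meanSq)
     let variance = Mat()
     Core.subtract(src1: sqMean, src2: meanSq, dst: variance)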

  • Calculates the normalized sum of squares of the pixel values overlapping the filter.

    For every pixel

    (x, y)
    in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel
    (x, y)
    .

    The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.

    Declaration

    Objective-C

    + (void)sqrBoxFilter:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                  ddepth:(int)ddepth
                   ksize:(nonnull Size2i *)ksize
                  anchor:(nonnull Point2i *)anchor
               normalize:(BOOL)normalize;

    Swift

    class func sqrBoxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i, anchor: Point2i, normalize: Bool)

    Parameters

    src

    input image

    dst

    output image of the same size and type as src

    ddepth

    the output image depth (-1 to use src.depth())

    ksize

    kernel size

    anchor

    kernel anchor point. The default value of Point(-1, -1) denotes that the anchor is at the kernel center.

    normalize

    flag specifying whether the kernel is to be normalized by its area or not.

  • Calculates the normalized sum of squares of the pixel values overlapping the filter.

    For every pixel

    (x, y)
    in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel
    (x, y)
    .

    The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.

    Declaration

    Objective-C

    + (void)sqrBoxFilter:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                  ddepth:(int)ddepth
                   ksize:(nonnull Size2i *)ksize
                  anchor:(nonnull Point2i *)anchor;

    Swift

    class func sqrBoxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i, anchor: Point2i)

    Parameters

    src

    input image

    dst

    output image of the same size and type as src

    ddepth

    the output image depth (-1 to use src.depth())

    ksize

    kernel size

    anchor

    kernel anchor point. The default value of Point(-1, -1) denotes that the anchor is at the kernel center.

  • Calculates the normalized sum of squares of the pixel values overlapping the filter.

    For every pixel

    (x, y)
    in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel
    (x, y)
    .

    The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.

    Declaration

    Objective-C

    + (void)sqrBoxFilter:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                  ddepth:(int)ddepth
                   ksize:(nonnull Size2i *)ksize;

    Swift

    class func sqrBoxFilter(src: Mat, dst: Mat, ddepth: Int32, ksize: Size2i)

    Parameters

    src

    input image

    dst

    output image of the same size and type as src

    ddepth

    the output image depth (-1 to use src.depth())

    ksize

    kernel size

  • Applies an affine transformation to an image.

    The function warpAffine transforms the source image using the specified matrix:

    \texttt{dst} (x,y) = \texttt{src} ( \texttt{M} _{11} x + \texttt{M} _{12} y + \texttt{M} _{13}, \texttt{M} _{21} x + \texttt{M} _{22} y + \texttt{M} _{23})

    when the flag #WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with #invertAffineTransform and then put in the formula above instead of M. The function cannot operate in-place.

    Declaration

    Objective-C

    + (void)warpAffine:(nonnull Mat *)src
                   dst:(nonnull Mat *)dst
                     M:(nonnull Mat *)M
                 dsize:(nonnull Size2i *)dsize
                 flags:(int)flags
            borderMode:(BorderTypes)borderMode
           borderValue:(nonnull Scalar *)borderValue;

    Swift

    class func warpAffine(src: Mat, dst: Mat, M: Mat, dsize: Size2i, flags: Int32, borderMode: BorderTypes, borderValue: Scalar)

    Parameters

    src

    input image.

    dst

    output image that has the size dsize and the same type as src .

    M

    2\times 3
    transformation matrix.

    dsize

    size of the output image.

    flags

    combination of interpolation methods (see #InterpolationFlags) and the optional flag #WARP_INVERSE_MAP that means that M is the inverse transformation (

    \texttt{dst}\rightarrow\texttt{src}
    ).

    borderMode

    pixel extrapolation method (see #BorderTypes); when borderMode=#BORDER_TRANSPARENT, it means that the pixels in the destination image corresponding to the “outliers” in the source image are not modified by the function.

    borderValue

    value used in case of a constant border; by default, it is 0.
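
    A Swift sketch (assuming the opencv2 module and that getRotationMatrix2D is exposed as in the C++ API; the source Mat is presumed populated):

     // Rotate 30 degrees about the image center, keeping the original canvas size.
     let src = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3) // assume populated
     let dst = Mat()
     let center = Point2f(x: Float(src.cols()) / 2, y: Float(src.rows()) / 2)
     let M = Imgproc.getRotationMatrix2D(center: center, angle: 30, scale: 1)
     Imgproc.warpAffine(src: src, dst: dst, M: M,
                        dsize: Size2i(width: src.cols(), height: src.rows()))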

  • Applies an affine transformation to an image.

    The function warpAffine transforms the source image using the specified matrix:

    \texttt{dst} (x,y) = \texttt{src} ( \texttt{M} _{11} x + \texttt{M} _{12} y + \texttt{M} _{13}, \texttt{M} _{21} x + \texttt{M} _{22} y + \texttt{M} _{23})

    when the flag #WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with #invertAffineTransform and then put in the formula above instead of M. The function cannot operate in-place.

    Declaration

    Objective-C

    + (void)warpAffine:(nonnull Mat *)src
                   dst:(nonnull Mat *)dst
                     M:(nonnull Mat *)M
                 dsize:(nonnull Size2i *)dsize
                 flags:(int)flags
            borderMode:(BorderTypes)borderMode;

    Swift

    class func warpAffine(src: Mat, dst: Mat, M: Mat, dsize: Size2i, flags: Int32, borderMode: BorderTypes)

    Parameters

    src

    input image.

    dst

    output image that has the size dsize and the same type as src .

    M

    2\times 3
    transformation matrix.

    dsize

    size of the output image.

    flags

    combination of interpolation methods (see #InterpolationFlags) and the optional flag #WARP_INVERSE_MAP that means that M is the inverse transformation (

    \texttt{dst}\rightarrow\texttt{src}
    ).

    borderMode

    pixel extrapolation method (see #BorderTypes); when borderMode=#BORDER_TRANSPARENT, it means that the pixels in the destination image corresponding to the “outliers” in the source image are not modified by the function.

  • Applies an affine transformation to an image.

    The function warpAffine transforms the source image using the specified matrix:

    \texttt{dst} (x,y) = \texttt{src} ( \texttt{M} _{11} x + \texttt{M} _{12} y + \texttt{M} _{13}, \texttt{M} _{21} x + \texttt{M} _{22} y + \texttt{M} _{23})

    when the flag #WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with #invertAffineTransform and then put in the formula above instead of M. The function cannot operate in-place.

    Declaration

    Objective-C

    + (void)warpAffine:(nonnull Mat *)src
                   dst:(nonnull Mat *)dst
                     M:(nonnull Mat *)M
                 dsize:(nonnull Size2i *)dsize
                 flags:(int)flags;

    Swift

    class func warpAffine(src: Mat, dst: Mat, M: Mat, dsize: Size2i, flags: Int32)

    Parameters

    src

    input image.

    dst

    output image that has the size dsize and the same type as src .

    M

    2\times 3
    transformation matrix.

    dsize

    size of the output image.

    flags

    combination of interpolation methods (see #InterpolationFlags) and the optional flag #WARP_INVERSE_MAP that means that M is the inverse transformation (

    \texttt{dst}\rightarrow\texttt{src}
    ).

  • Applies an affine transformation to an image.

    The function warpAffine transforms the source image using the specified matrix:

    \texttt{dst} (x,y) = \texttt{src} ( \texttt{M} _{11} x + \texttt{M} _{12} y + \texttt{M} _{13}, \texttt{M} _{21} x + \texttt{M} _{22} y + \texttt{M} _{23})

    when the flag #WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with #invertAffineTransform and then put in the formula above instead of M. The function cannot operate in-place.

    Declaration

    Objective-C

    + (void)warpAffine:(nonnull Mat *)src
                   dst:(nonnull Mat *)dst
                     M:(nonnull Mat *)M
                 dsize:(nonnull Size2i *)dsize;

    Swift

    class func warpAffine(src: Mat, dst: Mat, M: Mat, dsize: Size2i)

    Parameters

    src

    input image.

    dst

    output image that has the size dsize and the same type as src .

    M

    2\times 3
    transformation matrix.

    dsize

    size of the output image.

  • Applies a perspective transformation to an image.

    The function warpPerspective transforms the source image using the specified matrix:

    \texttt{dst} (x,y) = \texttt{src} \left ( \frac{M_{11} x + M_{12} y + M_{13}}{M_{31} x + M_{32} y + M_{33}} , \frac{M_{21} x + M_{22} y + M_{23}}{M_{31} x + M_{32} y + M_{33}} \right )

    when the flag #WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert and then put in the formula above instead of M. The function cannot operate in-place.

    Declaration

    Objective-C

    + (void)warpPerspective:(nonnull Mat *)src
                        dst:(nonnull Mat *)dst
                          M:(nonnull Mat *)M
                      dsize:(nonnull Size2i *)dsize
                      flags:(int)flags
                 borderMode:(BorderTypes)borderMode
                borderValue:(nonnull Scalar *)borderValue;

    Swift

    class func warpPerspective(src: Mat, dst: Mat, M: Mat, dsize: Size2i, flags: Int32, borderMode: BorderTypes, borderValue: Scalar)

    Parameters

    src

    input image.

    dst

    output image that has the size dsize and the same type as src .

    M

    3\times 3
    transformation matrix.

    dsize

    size of the output image.

    flags

    combination of interpolation methods (#INTER_LINEAR or #INTER_NEAREST) and the optional flag #WARP_INVERSE_MAP, that sets M as the inverse transformation (

    \texttt{dst}\rightarrow\texttt{src}
    ).

    borderMode

    pixel extrapolation method (#BORDER_CONSTANT or #BORDER_REPLICATE).

    borderValue

    value used in case of a constant border; by default, it equals 0.
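
    A Swift sketch (assuming the opencv2 module and that getPerspectiveTransform accepts MatOfPoint2f arguments; the corner coordinates are purely illustrative):

     let src = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3) // assume populated
     let dst = Mat()
     let srcQuad = MatOfPoint2f(array: [
         Point2f(x: 0, y: 0),     Point2f(x: 639, y: 0),
         Point2f(x: 639, y: 479), Point2f(x: 0, y: 479)])
     let dstQuad = MatOfPoint2f(array: [
         Point2f(x: 50, y: 0),    Point2f(x: 589, y: 30),
         Point2f(x: 639, y: 479), Point2f(x: 0, y: 449)])
     let M = Imgproc.getPerspectiveTransform(src: srcQuad, dst: dstQuad)
     Imgproc.warpPerspective(src: src, dst: dst, M: M,
                             dsize: Size2i(width: 640, height: 480))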

  • Applies a perspective transformation to an image.

    The function warpPerspective transforms the source image using the specified matrix:

    \texttt{dst} (x,y) = \texttt{src} \left ( \frac{M_{11} x + M_{12} y + M_{13}}{M_{31} x + M_{32} y + M_{33}} , \frac{M_{21} x + M_{22} y + M_{23}}{M_{31} x + M_{32} y + M_{33}} \right )

    when the flag #WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert and then put in the formula above instead of M. The function cannot operate in-place.

    Declaration

    Objective-C

    + (void)warpPerspective:(nonnull Mat *)src
                        dst:(nonnull Mat *)dst
                          M:(nonnull Mat *)M
                      dsize:(nonnull Size2i *)dsize
                      flags:(int)flags
                 borderMode:(BorderTypes)borderMode;

    Swift

    class func warpPerspective(src: Mat, dst: Mat, M: Mat, dsize: Size2i, flags: Int32, borderMode: BorderTypes)

    Parameters

    src

    input image.

    dst

    output image that has the size dsize and the same type as src .

    M

    3\times 3
    transformation matrix.

    dsize

    size of the output image.

    flags

    combination of interpolation methods (#INTER_LINEAR or #INTER_NEAREST) and the optional flag #WARP_INVERSE_MAP, that sets M as the inverse transformation (

    \texttt{dst}\rightarrow\texttt{src}
    ).

    borderMode

    pixel extrapolation method (#BORDER_CONSTANT or #BORDER_REPLICATE).

  • Applies a perspective transformation to an image.

    The function warpPerspective transforms the source image using the specified matrix:

    \texttt{dst} (x,y) = \texttt{src} \left ( \frac{M_{11} x + M_{12} y + M_{13}}{M_{31} x + M_{32} y + M_{33}} , \frac{M_{21} x + M_{22} y + M_{23}}{M_{31} x + M_{32} y + M_{33}} \right )

    when the flag #WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert and then put in the formula above instead of M. The function cannot operate in-place.

    Declaration

    Objective-C

    + (void)warpPerspective:(nonnull Mat *)src
                        dst:(nonnull Mat *)dst
                          M:(nonnull Mat *)M
                      dsize:(nonnull Size2i *)dsize
                      flags:(int)flags;

    Swift

    class func warpPerspective(src: Mat, dst: Mat, M: Mat, dsize: Size2i, flags: Int32)

    Parameters

    src

    input image.

    dst

    output image that has the size dsize and the same type as src .

    M

    3\times 3
    transformation matrix.

    dsize

    size of the output image.

    flags

    combination of interpolation methods (#INTER_LINEAR or #INTER_NEAREST) and the optional flag #WARP_INVERSE_MAP, that sets M as the inverse transformation (

    \texttt{dst}\rightarrow\texttt{src}
    ).

  • Applies a perspective transformation to an image.

    The function warpPerspective transforms the source image using the specified matrix:

    \texttt{dst} (x,y) = \texttt{src} \left ( \frac{M_{11} x + M_{12} y + M_{13}}{M_{31} x + M_{32} y + M_{33}} , \frac{M_{21} x + M_{22} y + M_{23}}{M_{31} x + M_{32} y + M_{33}} \right )

    when the flag #WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert and then put in the formula above instead of M. The function cannot operate in-place.

    Declaration

    Objective-C

    + (void)warpPerspective:(nonnull Mat *)src
                        dst:(nonnull Mat *)dst
                          M:(nonnull Mat *)M
                      dsize:(nonnull Size2i *)dsize;

    Swift

    class func warpPerspective(src: Mat, dst: Mat, M: Mat, dsize: Size2i)

    Parameters

    src

    input image.

    dst

    output image that has the size dsize and the same type as src .

    M

    3\times 3
    transformation matrix.

    dsize

    size of the output image.

  • Remaps an image to polar or semilog-polar coordinates space

    [Figure: polar remaps reference]

    Transform the source image using the following transformation:

    dst(\rho , \phi ) = src(x,y)

    where

    \begin{array}{l}
    \vec{I} = (x - center.x, \; y - center.y) \\
    \phi = Kangle \cdot \texttt{angle}(\vec{I}) \\
    \rho = \left\{ \begin{matrix} Klin \cdot \texttt{magnitude}(\vec{I}) & \text{default} \\ Klog \cdot \log_e(\texttt{magnitude}(\vec{I})) & \text{if semilog} \end{matrix} \right.
    \end{array}

    and

    \begin{array}{l}
    Kangle = dsize.height / 2\Pi \\
    Klin = dsize.width / maxRadius \\
    Klog = dsize.width / \log_e(maxRadius)
    \end{array}

    Linear vs semilog mapping

    Polar mapping can be linear or semi-log. Add one of #WarpPolarMode to flags to specify the polar mapping mode.

    Linear is the default mode.

    The semilog mapping emulates the human “foveal” vision, which permits very high acuity on the line of sight (central vision), in contrast to peripheral vision, where acuity is lower.

    Options on dsize:

    • if both values in dsize <= 0 (default), the destination image will have (almost) the same area as the source bounding circle:

      \begin{array}{l}
      dsize.area \leftarrow (maxRadius^2 \cdot \Pi) \\
      dsize.width = \texttt{cvRound}(maxRadius) \\
      dsize.height = \texttt{cvRound}(maxRadius \cdot \Pi)
      \end{array}

    • if only dsize.height <= 0, the destination image area will be proportional to the bounding circle area but scaled by Kx * Kx:

      \begin{array}{l} dsize.height = \texttt{cvRound}(dsize.width \cdot \Pi) \end{array}

    • if both values in dsize > 0, the destination image will have the given size, and therefore the area of the bounding circle will be scaled to dsize.

    Reverse mapping

    You can get the reverse mapping by adding #WARP_INVERSE_MAP to flags (see the InverseMap snippet in polar_transforms.cpp).

    In addition, to calculate the original coordinate from a polar-mapped coordinate (rho, phi) -> (x, y), see the InverseCoordinate snippet in polar_transforms.cpp.

    • The function cannot operate in-place.
    • To calculate magnitude and angle in degrees, #cartToPolar is used internally; thus angles are measured from 0 to 360 with an accuracy of about 0.3 degrees.
    • This function uses #remap. Due to current implementation limitations, the sizes of the input and output images should be less than 32767x32767.

    See

    cv::remap

    Declaration

    Objective-C

    + (void)warpPolar:(nonnull Mat *)src
                  dst:(nonnull Mat *)dst
                dsize:(nonnull Size2i *)dsize
               center:(nonnull Point2f *)center
            maxRadius:(double)maxRadius
                flags:(int)flags;

    Swift

    class func warpPolar(src: Mat, dst: Mat, dsize: Size2i, center: Point2f, maxRadius: Double, flags: Int32)
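
    A Swift sketch (assuming the opencv2 module; the source Mat is presumed populated):

     // Linear polar unwrap of the largest circle centered in the image.
     let src = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3) // assume populated
     let dst = Mat()
     let center = Point2f(x: Float(src.cols()) / 2, y: Float(src.rows()) / 2)
     let maxRadius = Double(min(src.rows(), src.cols())) / 2
     // A zero dsize lets the function pick a size matching the bounding-circle area;
     // linear mapping is the default #WarpPolarMode.
     Imgproc.warpPolar(src: src, dst: dst, dsize: Size2i(width: 0, height: 0),
                       center: center, maxRadius: maxRadius,
                       flags: InterpolationFlags.INTER_LINEAR.rawValue)
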
  • Performs a marker-based image segmentation using the watershed algorithm.

    The function implements one of the variants of watershed, non-parametric marker-based segmentation algorithm, described in CITE: Meyer92 .

    Before passing the image to the function, you have to roughly outline the desired regions in the image markers with positive (>0) indices. So, every region is represented as one or more connected components with the pixel values 1, 2, 3, and so on. Such markers can be retrieved from a binary mask using #findContours and #drawContours (see the watershed.cpp demo). The markers are “seeds” of the future image regions. All the other pixels in markers , whose relation to the outlined regions is not known and should be defined by the algorithm, should be set to 0’s. In the function output, each pixel in markers is set to a value of the “seed” components or to -1 at boundaries between the regions.

    Note

    Any two neighboring connected components are not necessarily separated by a watershed boundary (pixels with value -1); for example, they can touch each other in the initial marker image passed to the function.


    Declaration

    Objective-C

    + (void)watershed:(nonnull Mat *)image markers:(nonnull Mat *)markers;

    Swift

    class func watershed(image: Mat, markers: Mat)

    Parameters

    image

    Input 8-bit 3-channel image.

    markers

    Input/output 32-bit single-channel image (map) of markers. It should have the same size as image .
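
    A Swift sketch of the seeding workflow described above (assuming the opencv2 module; the binary seed mask is presumed to be prepared upstream, e.g. by thresholding):

     let image = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3)  // assume populated
     let seeds = Mat(rows: 480, cols: 640, type: CvType.CV_8UC1)  // assume binary seed mask
     let markers = Mat()
     // Each connected blob in the mask becomes a positive seed index; background stays 0,
     // and the labels Mat comes out as the 32-bit single-channel map watershed expects.
     _ = Imgproc.connectedComponents(image: seeds, labels: markers)
     Imgproc.watershed(image: image, markers: markers)
     // markers now assigns every pixel a region index, or -1 on watershed boundaries.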