Classes

The following classes are available globally.

  • Utility class to wrap a std::vector<char>

    See more

    Declaration

    Objective-C

    @interface ByteVector : NSObject
    extension ByteVector : Sequence

    Swift

    class ByteVector : NSObject
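    Example

    Because the Swift extension above conforms ByteVector to Sequence, its elements can be iterated with for-in. A minimal sketch; the array initializer is an assumption based on the wrapper's purpose:

    Swift

    import opencv2

    // Assumed convenience initializer from a Swift byte array.
    let bytes = ByteVector([72, 105, 33])
    for b in bytes {
        print(b)   // each wrapped char value
    }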
  • Utility functions for handling CvType values

    See more

    Declaration

    Objective-C

    @interface CvType : NSObject

    Swift

    class CvType : NSObject
  • Utility class to wrap a std::vector<double>

    See more

    Declaration

    Objective-C

    @interface DoubleVector : NSObject
    extension DoubleVector : Sequence

    Swift

    class DoubleVector : NSObject
  • Utility class to wrap a std::vector<float>

    See more

    Declaration

    Objective-C

    @interface FloatVector : NSObject
    extension FloatVector : Sequence

    Swift

    class FloatVector : NSObject
  • Utility class to wrap a std::vector<int>

    See more

    Declaration

    Objective-C

    @interface IntVector : NSObject
    extension IntVector : Sequence

    Swift

    class IntVector : NSObject
  • Mat

    The class Mat represents an n-dimensional dense numerical single-channel or multi-channel array.

    See more

    Declaration

    Objective-C

    @interface Mat : NSObject

    Swift

    class Mat : NSObject
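    Example

    A minimal sketch of creating and inspecting a small matrix; the setTo(scalar:) label, the single-value Scalar initializer and dump() are assumed to mirror the underlying C++ Mat API:

    Swift

    import opencv2

    // 3x3 single-channel 8-bit matrix.
    let m = Mat(rows: 3, cols: 3, type: CvType.CV_8UC1)
    m.setTo(scalar: Scalar(255))   // fill with a constant (label assumed)
    print(m.dump())                // textual dump of the contents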
  • Class implementing the AKAZE keypoint detector and descriptor extractor, described in CITE: ANB13.

    AKAZE descriptors can only be used with KAZE or AKAZE keypoints. This class is thread-safe.

    Note

    When you need descriptors, use Feature2D::detectAndCompute, which provides better performance. When using Feature2D::detect followed by Feature2D::compute, the scale-space pyramid is computed twice.

    Note

    AKAZE implements T-API. When an image is passed as a UMat, some parts of the algorithm will use OpenCL.

    Note

    [ANB13] Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. Pablo F. Alcantarilla, Jesús Nuevo and Adrien Bartoli. In British Machine Vision Conference (BMVC), Bristol, UK, September 2013.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface AKAZE : Feature2D

    Swift

    class AKAZE : Feature2D
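    Example

    As the note above recommends, detectAndCompute builds the scale-space pyramid once for both outputs. A hedged sketch; the argument labels are assumed to mirror the Feature2D API, and gray is a placeholder input:

    Swift

    import opencv2

    let akaze = AKAZE.create()
    let gray = Mat()                   // placeholder input image
    let keypoints = NSMutableArray()   // receives KeyPoint objects
    let descriptors = Mat()
    // Single pass: keypoints and descriptors together.
    akaze.detectAndCompute(image: gray, mask: Mat(),
                           keypoints: keypoints, descriptors: descriptors)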
  • Artificial Neural Networks - Multi-Layer Perceptrons.

    Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zero. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once; that is, the weights can be adjusted based on the new training data.

    Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.

    See

    REF: ml_intro_ann

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface ANN_MLP : StatModel

    Swift

    class ANN_MLP : StatModel
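    Example

    A sketch of the create-then-train flow described above; the setLayerSizes/train labels and the Ml.ROW_SAMPLE constant are assumptions modeled on the C++/Java ML API, and the sample Mats are placeholders:

    Swift

    import opencv2

    let mlp = ANN_MLP.create()
    // Topology as a 1xN integer Mat: input, hidden, output neuron counts.
    let layers = Mat(rows: 1, cols: 3, type: CvType.CV_32S)
    mlp.setLayerSizes(layers)
    // Weights start at zero; training may be repeated with new data.
    let trainSamples = Mat(rows: 4, cols: 2, type: CvType.CV_32F)
    let responses = Mat(rows: 4, cols: 1, type: CvType.CV_32F)
    mlp.train(samples: trainSamples, layout: Ml.ROW_SAMPLE, responses: responses)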
  • Wrapping class for feature detection using the AGAST method.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface AgastFeatureDetector : Feature2D

    Swift

    class AgastFeatureDetector : Feature2D
  • This is a base class for all more or less complex algorithms in OpenCV

    It is intended especially for classes of algorithms for which there can be multiple implementations. Examples are stereo correspondence (for which there are algorithms like block matching, semi-global block matching, graph-cut, etc.), background subtraction (which can be done using mixture-of-gaussians models, codebook-based algorithms, etc.) and optical flow (block matching, Lucas-Kanade, Horn-Schunck, etc.).

    Here is an example of using SimpleBlobDetector in your application via the Algorithm interface: SNIPPET: snippets/core_various.cpp Algorithm

    Member of Core

    See more

    Declaration

    Objective-C

    @interface Algorithm : NSObject

    Swift

    class Algorithm : NSObject
  • The base class for algorithms that align images of the same scene with different exposures

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface AlignExposures : Algorithm

    Swift

    class AlignExposures : Algorithm
  • This algorithm converts images to median threshold bitmaps (1 for pixels brighter than median luminance and 0 otherwise) and then aligns the resulting bitmaps using bit operations.

    It is invariant to exposure, so exposure values and camera response are not necessary.

    In this implementation new image regions are filled with zeros.

    For more information see CITE: GW03 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface AlignMTB : AlignExposures

    Swift

    class AlignMTB : AlignExposures
  • Brute-force descriptor matcher.

    For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches of descriptor sets.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BFMatcher : DescriptorMatcher

    Swift

    class BFMatcher : DescriptorMatcher
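    Example

    A sketch of one-to-one matching between two descriptor Mats (for instance from AKAZE above); the argument labels are assumed to mirror the DescriptorMatcher API:

    Swift

    import opencv2

    let matcher = BFMatcher()
    let queryDesc = Mat()            // placeholder query descriptors
    let trainDesc = Mat()            // placeholder train descriptors
    let matches = NSMutableArray()   // receives DMatch objects
    // For each query descriptor, find the closest train descriptor.
    matcher.match(queryDescriptors: queryDesc,
                  trainDescriptors: trainDesc, matches: matches)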
  • Class to compute an image descriptor using the bag of visual words.

    Such a computation consists of the following steps:

    1. Compute descriptors for a given image and its keypoints set.
    2. Find the nearest visual words from the vocabulary for each keypoint descriptor.
    3. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The i-th bin of the histogram is the frequency of the i-th word of the vocabulary in the given image.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BOWImgDescriptorExtractor : NSObject

    Swift

    class BOWImgDescriptorExtractor : NSObject
  • kmeans-based class to train a visual vocabulary using the bag of visual words approach.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BOWKMeansTrainer : BOWTrainer

    Swift

    class BOWKMeansTrainer : BOWTrainer
  • Abstract base class for training the bag of visual words vocabulary from a set of descriptors.

    For details, see, for example, Visual Categorization with Bags of Keypoints by Gabriella Csurka, Christopher R. Dance, Lixin Fan, Jutta Willamowski, Cedric Bray, 2004.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BOWTrainer : NSObject

    Swift

    class BOWTrainer : NSObject
  • Class implementing the BRISK keypoint detector and descriptor extractor, described in CITE: LCS11 .

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BRISK : Feature2D

    Swift

    class BRISK : Feature2D
  • Base class for background/foreground segmentation.

    The class is only used to define the common interface for the whole family of background/foreground segmentation algorithms.

    Member of Video

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractor : Algorithm

    Swift

    class BackgroundSubtractor : Algorithm
  • K-nearest neighbours-based background/foreground segmentation algorithm.

    The class implements the K-nearest neighbours background subtraction described in CITE: Zivkovic2006 . Very efficient if the number of foreground pixels is low.

    Member of Video

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorKNN : BackgroundSubtractor

    Swift

    class BackgroundSubtractorKNN : BackgroundSubtractor
  • Gaussian Mixture-based Background/Foreground Segmentation Algorithm.

    The class implements the Gaussian mixture model background subtraction described in CITE: Zivkovic2004 and CITE: Zivkovic2006 .

    Member of Video

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorMOG2 : BackgroundSubtractor

    Swift

    class BackgroundSubtractorMOG2 : BackgroundSubtractor
  • The BaseCascadeClassifier module

    Member of Objdetect

    Declaration

    Objective-C

    @interface BaseCascadeClassifier : Algorithm

    Swift

    class BaseCascadeClassifier : Algorithm
  • Boosted tree classifier derived from DTrees

    See

    REF: ml_intro_boost

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface Boost : DTrees

    Swift

    class Boost : DTrees
  • Base class for Contrast Limited Adaptive Histogram Equalization.

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface CLAHE : Algorithm

    Swift

    class CLAHE : Algorithm
  • Declaration

    Objective-C

    @interface Calib3d : NSObject

    Swift

    class Calib3d : NSObject
  • The base class for camera response calibration algorithms.

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface CalibrateCRF : Algorithm

    Swift

    class CalibrateCRF : Algorithm
  • The inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. The objective function is constructed using pixel values at the same position in all images; an extra term is added to make the result smoother.

    For more information see CITE: DM97 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface CalibrateDebevec : CalibrateCRF

    Swift

    class CalibrateDebevec : CalibrateCRF
  • The inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. This algorithm uses all image pixels.

    For more information see CITE: RB99 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface CalibrateRobertson : CalibrateCRF

    Swift

    class CalibrateRobertson : CalibrateCRF
  • Cascade classifier class for object detection.

    Member of Objdetect

    See more

    Declaration

    Objective-C

    @interface CascadeClassifier : NSObject

    Swift

    class CascadeClassifier : NSObject
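    Example

    A sketch of loading a trained cascade and running multi-scale detection; the initializer and detectMultiScale labels are assumptions modeled on the C++ API, and the file name plus input are placeholders:

    Swift

    import opencv2

    let cascade = CascadeClassifier(filename: "haarcascade_frontalface_alt.xml")
    let gray = Mat()                 // placeholder grayscale input
    let objects = NSMutableArray()   // receives Rect2i results
    cascade.detectMultiScale(image: gray, objects: objects)
    print("detected \(objects.count) objects")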
  • The CirclesGridFinderParameters module

    Member of Calib3d

    See more

    Declaration

    Objective-C

    @interface CirclesGridFinderParameters : NSObject

    Swift

    class CirclesGridFinderParameters : NSObject
  • This class represents a high-level API for classification models.

    ClassificationModel allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets the preprocessing input, runs a forward pass and returns the top-1 prediction.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface ClassificationModel : Model

    Swift

    class ClassificationModel : Model
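    Example

    A sketch of the workflow described above: create the net from weights plus config, set preprocessing, and read the top-1 prediction. The file names are placeholders, and the initializer, setInputSize and classify signatures are assumptions based on the C++ dnn API:

    Swift

    import opencv2

    let model = ClassificationModel(model: "net.caffemodel", config: "net.prototxt")
    model.setInputSize(Size2i(width: 224, height: 224))
    let image = Mat()                // placeholder input frame
    var classId: Int32 = 0
    var confidence: Float = 0
    // Forward pass; yields the best class index and its confidence.
    model.classify(frame: image, classId: &classId, conf: &confidence)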
  • Declaration

    Objective-C

    @interface Converters : NSObject
    
    + (Mat*)vector_Point_to_Mat:(NSArray<Point2i*>*)pts NS_SWIFT_NAME(vector_Point_to_Mat(_:));
    
    + (NSArray<Point2i*>*)Mat_to_vector_Point:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point(_:));
    
    + (Mat*)vector_Point2f_to_Mat:(NSArray<Point2f*>*)pts NS_SWIFT_NAME(vector_Point2f_to_Mat(_:));
    
    + (NSArray<Point2f*>*)Mat_to_vector_Point2f:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point2f(_:));
    
    + (Mat*)vector_Point2d_to_Mat:(NSArray<Point2d*>*)pts NS_SWIFT_NAME(vector_Point2d_to_Mat(_:));
    
    + (NSArray<Point2f*>*)Mat_to_vector_Point2d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point2d(_:));
    
    + (Mat*)vector_Point3i_to_Mat:(NSArray<Point3i*>*)pts NS_SWIFT_NAME(vector_Point3i_to_Mat(_:));
    
    + (NSArray<Point3i*>*)Mat_to_vector_Point3i:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3i(_:));
    
    + (Mat*)vector_Point3f_to_Mat:(NSArray<Point3f*>*)pts NS_SWIFT_NAME(vector_Point3f_to_Mat(_:));
    
    + (NSArray<Point3f*>*)Mat_to_vector_Point3f:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3f(_:));
    
    + (Mat*)vector_Point3d_to_Mat:(NSArray<Point3d*>*)pts NS_SWIFT_NAME(vector_Point3d_to_Mat(_:));
    
    + (NSArray<Point3d*>*)Mat_to_vector_Point3d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3d(_:));
    
    + (Mat*)vector_float_to_Mat:(NSArray<NSNumber*>*)fs NS_SWIFT_NAME(vector_float_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_float:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_float(_:));
    
    + (Mat*)vector_uchar_to_Mat:(NSArray<NSNumber*>*)us NS_SWIFT_NAME(vector_uchar_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_uchar:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_uchar(_:));
    
    + (Mat*)vector_char_to_Mat:(NSArray<NSNumber*>*)cs NS_SWIFT_NAME(vector_char_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_char:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_char(_:));
    
    + (Mat*)vector_int_to_Mat:(NSArray<NSNumber*>*)is NS_SWIFT_NAME(vector_int_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_int:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_int(_:));
    
    + (Mat*)vector_Rect_to_Mat:(NSArray<Rect2i*>*)rs NS_SWIFT_NAME(vector_Rect_to_Mat(_:));
    
    + (NSArray<Rect2i*>*)Mat_to_vector_Rect:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Rect(_:));
    
    + (Mat*)vector_Rect2d_to_Mat:(NSArray<Rect2d*>*)rs NS_SWIFT_NAME(vector_Rect2d_to_Mat(_:));
    
    + (NSArray<Rect2d*>*)Mat_to_vector_Rect2d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Rect2d(_:));
    
    + (Mat*)vector_KeyPoint_to_Mat:(NSArray<KeyPoint*>*)kps NS_SWIFT_NAME(vector_KeyPoint_to_Mat(_:));
    
    + (NSArray<KeyPoint*>*)Mat_to_vector_KeyPoint:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_KeyPoint(_:));
    
    + (Mat*)vector_double_to_Mat:(NSArray<NSNumber*>*)ds NS_SWIFT_NAME(vector_double_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_double:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_double(_:));
    
    + (Mat*)vector_DMatch_to_Mat:(NSArray<DMatch*>*)matches NS_SWIFT_NAME(vector_DMatch_to_Mat(_:));
    
    + (NSArray<DMatch*>*)Mat_to_vector_DMatch:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_DMatch(_:));
    
    + (Mat*)vector_RotatedRect_to_Mat:(NSArray<RotatedRect*>*)rs NS_SWIFT_NAME(vector_RotatedRect_to_Mat(_:));
    
    + (NSArray<RotatedRect*>*)Mat_to_vector_RotatedRect:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_RotatedRect(_:));
    
    @end

    Swift

    class Converters : NSObject
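    Example

    The converters above round-trip Foundation arrays through Mat, using the vector_Point signatures exactly as declared (the Point2i initializer label is an assumption):

    Swift

    import opencv2

    let pts = [Point2i(x: 1, y: 2), Point2i(x: 3, y: 4)]
    let mat = Converters.vector_Point_to_Mat(pts)    // NSArray -> Mat
    let back = Converters.Mat_to_vector_Point(mat)   // Mat -> NSArray
    print(back.count)                                // 2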
  • Declaration

    Objective-C

    @interface Core : NSObject

    Swift

    class Core : NSObject
  • Declaration

    Objective-C

    @interface CvAbstractCamera2 : NSObject
    
    @property UIDeviceOrientation currentDeviceOrientation;
    @property BOOL cameraAvailable;
    @property (nonatomic, strong) AVCaptureSession* captureSession;
    @property (nonatomic, strong) AVCaptureConnection* videoCaptureConnection;
    
    @property (nonatomic, readonly) BOOL running;
    @property (nonatomic, readonly) BOOL captureSessionLoaded;
    
    @property (nonatomic, assign) int defaultFPS;
    @property (nonatomic, readonly) AVCaptureVideoPreviewLayer *captureVideoPreviewLayer;
    @property (nonatomic, assign) AVCaptureDevicePosition defaultAVCaptureDevicePosition;
    @property (nonatomic, assign) AVCaptureVideoOrientation defaultAVCaptureVideoOrientation;
    @property (nonatomic, assign) BOOL useAVCaptureVideoPreviewLayer;
    @property (nonatomic, strong) NSString *const defaultAVCaptureSessionPreset;
    @property (nonatomic, assign) int imageWidth;
    @property (nonatomic, assign) int imageHeight;
    @property (nonatomic, strong) UIView* parentView;
    
    - (void)start;
    - (void)stop;
    - (void)switchCameras;
    - (id)initWithParentView:(UIView*)parent;
    - (void)createCaptureOutput;
    - (void)createVideoPreviewLayer;
    - (void)updateOrientation;
    - (void)lockFocus;
    - (void)unlockFocus;
    - (void)lockExposure;
    - (void)unlockExposure;
    - (void)lockBalance;
    - (void)unlockBalance;
    @end

    Swift

    class CvAbstractCamera2 : NSObject
  • ////////////////////////////// CvVideoCamera ///////////////////////////////////////////

    See more

    Declaration

    Objective-C

    @class CvVideoCamera2;

    Swift

    class CvVideoCamera2 : CvAbstractCamera2, AVCaptureVideoDataOutputSampleBufferDelegate
  • ////////////////////////////// CvPhotoCamera ///////////////////////////////////////////

    See more

    Declaration

    Objective-C

    @class CvPhotoCamera2;

    Swift

    class CvPhotoCamera2 : CvAbstractCamera2, AVCapturePhotoCaptureDelegate
  • DIS optical flow algorithm.

    This class implements the Dense Inverse Search (DIS) optical flow algorithm. More details about the algorithm can be found at CITE: Kroeger2016 . It includes three presets with preselected parameters to provide a reasonable trade-off between speed and quality. However, even the slowest preset is still relatively fast; use DeepFlow if you need better quality and don't care about speed.

    This implementation includes several additional features compared to the algorithm described in the paper, including spatial propagation of flow vectors (REF: getUseSpatialPropagation), as well as an option to utilize an initial flow approximation passed to REF: calc (which is, essentially, temporal propagation, if the previous frame’s flow field is passed).

    Member of Video

    See more

    Declaration

    Objective-C

    @interface DISOpticalFlow : DenseOpticalFlow

    Swift

    class DISOpticalFlow : DenseOpticalFlow
  • Structure for matching: query descriptor index, train descriptor index, train image index and distance between descriptors.

    See more

    Declaration

    Objective-C

    @interface DMatch : NSObject

    Swift

    class DMatch : NSObject
  • The class represents a single decision tree or a collection of decision trees.

    The current public interface of the class allows the user to train only a single decision tree; however, the class is capable of storing multiple decision trees and using them for prediction (by summing responses or using a voting scheme), and the classes derived from DTrees (such as RTrees and Boost) use this capability to implement decision tree ensembles.

    See

    REF: ml_intro_trees

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface DTrees : StatModel

    Swift

    class DTrees : StatModel
  • Base class for dense optical flow algorithms

    Member of Video

    See more

    Declaration

    Objective-C

    @interface DenseOpticalFlow : Algorithm

    Swift

    class DenseOpticalFlow : Algorithm
  • Abstract base class for matching keypoint descriptors.

    It has two groups of match methods: for matching descriptors of an image with another image or with an image set.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface DescriptorMatcher : Algorithm

    Swift

    class DescriptorMatcher : Algorithm
  • This class represents a high-level API for object detection networks.

    DetectionModel allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets the preprocessing input, runs a forward pass and returns the resulting detections. The SSD, Faster R-CNN and YOLO topologies are supported.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface DetectionModel : Model

    Swift

    class DetectionModel : Model
  • This struct stores a scalar value (or array) of one of the following types: double, cv::String or int64.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface DictValue : NSObject

    Swift

    class DictValue : NSObject
  • Dnn

    Declaration

    Objective-C

    @interface Dnn : NSObject

    Swift

    class Dnn : NSObject
  • Simple wrapper for a vector of two doubles

    See more

    Declaration

    Objective-C

    @interface Double2 : NSObject

    Swift

    class Double2 : NSObject
  • Simple wrapper for a vector of three doubles

    See more

    Declaration

    Objective-C

    @interface Double3 : NSObject

    Swift

    class Double3 : NSObject
  • EM

    The class implements the Expectation Maximization algorithm.

    See

    REF: ml_intro_em

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface EM : StatModel

    Swift

    class EM : StatModel
  • Class computing a dense optical flow using Gunnar Farneback's algorithm.

    Member of Video

    See more

    Declaration

    Objective-C

    @interface FarnebackOpticalFlow : DenseOpticalFlow

    Swift

    class FarnebackOpticalFlow : DenseOpticalFlow
  • Wrapping class for feature detection using the FAST method.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface FastFeatureDetector : Feature2D

    Swift

    class FastFeatureDetector : Feature2D
  • Abstract base class for 2D image feature detectors and descriptor extractors

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface Feature2D : Algorithm

    Swift

    class Feature2D : Algorithm
  • Flann-based descriptor matcher.

    This matcher trains cv::flann::Index on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster when matching a large train collection than the brute-force matcher. FlannBasedMatcher does not support masking permissible matches of descriptor sets because flann::Index does not support this.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface FlannBasedMatcher : DescriptorMatcher

    Swift

    class FlannBasedMatcher : DescriptorMatcher
  • Simple wrapper for a vector of four floats

    See more

    Declaration

    Objective-C

    @interface Float4 : NSObject

    Swift

    class Float4 : NSObject
  • Simple wrapper for a vector of six floats

    See more

    Declaration

    Objective-C

    @interface Float6 : NSObject

    Swift

    class Float6 : NSObject
  • Wrapping class for feature detection using the goodFeaturesToTrack function.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface GFTTDetector : Feature2D

    Swift

    class GFTTDetector : Feature2D
  • Finds an arbitrary template in a grayscale image using the generalized Hough transform

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface GeneralizedHough : Algorithm

    Swift

    class GeneralizedHough : Algorithm
  • Finds an arbitrary template in a grayscale image using the generalized Hough transform

    Detects position only, without translation and rotation CITE: Ballard1981 .

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface GeneralizedHoughBallard : GeneralizedHough

    Swift

    class GeneralizedHoughBallard : GeneralizedHough
  • Finds an arbitrary template in a grayscale image using the generalized Hough transform

    Detects position, translation and rotation CITE: Guil1999 .

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface GeneralizedHoughGuil : GeneralizedHough

    Swift

    class GeneralizedHoughGuil : GeneralizedHough
  • Declaration

    Objective-C

    @interface HOGDescriptor : NSObject

    Swift

    class HOGDescriptor : NSObject
  • The Imgcodecs module

    Member of Imgcodecs

    Member enums: ImreadModes, ImwriteFlags, ImwriteEXRTypeFlags, ImwritePNGFlags, ImwritePAMFlags

    See more

    Declaration

    Objective-C

    @interface Imgcodecs : NSObject

    Swift

    class Imgcodecs : NSObject
  • Simple wrapper for a vector of four ints

    See more

    Declaration

    Objective-C

    @interface Int4 : NSObject

    Swift

    class Int4 : NSObject
  • Class implementing the KAZE keypoint detector and descriptor extractor, described in CITE: ABD12 .

    Note

    AKAZE descriptors can only be used with KAZE or AKAZE keypoints. [ABD12] KAZE Features. Pablo F. Alcantarilla, Adrien Bartoli and Andrew J. Davison. In European Conference on Computer Vision (ECCV), Florence, Italy, October 2012.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface KAZE : Feature2D

    Swift

    class KAZE : Feature2D
  • The class implements the K-Nearest Neighbors model

    See

    REF: ml_intro_knn

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface KNearest : StatModel

    Swift

    class KNearest : StatModel
  • Kalman filter class.

    The class implements a standard Kalman filter http://en.wikipedia.org/wiki/Kalman_filter, CITE: Welch95 . However, you can modify transitionMatrix, controlMatrix, and measurementMatrix to get extended Kalman filter functionality.

    Note

    In the C API, when the CvKalman* kalmanFilter structure is no longer needed, it should be released with cvReleaseKalman(&kalmanFilter)

    Member of Video

    See more

    Declaration

    Objective-C

    @interface KalmanFilter : NSObject

    Swift

    class KalmanFilter : NSObject
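    Example

    A sketch of one predict/correct cycle for a constant-velocity model; the constructor label is an assumption modeled on the C++ signature:

    Swift

    import opencv2

    // 4 state variables (x, y, vx, vy), 2 measured (x, y).
    let kf = KalmanFilter(dynamParams: 4, measureParams: 2)
    let measurement = Mat(rows: 2, cols: 1, type: CvType.CV_32F)
    let predicted = kf.predict()              // a-priori state estimate
    let corrected = kf.correct(measurement)   // a-posteriori estimate
    print(predicted.dump(), corrected.dump())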
  • Object representing a point feature found by one of many available keypoint detectors, such as the Harris corner detector, FAST, StarDetector, SURF, SIFT, etc.

    See more

    Declaration

    Objective-C

    @interface KeyPoint : NSObject

    Swift

    class KeyPoint : NSObject
  • This class represents a high-level API for keypoints models

    KeypointsModel allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets the preprocessing input, runs a forward pass and returns the x and y coordinates of each detected keypoint

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface KeypointsModel : Model

    Swift

    class KeypointsModel : Model
  • This interface class allows building new layers, the building blocks of networks.

    Each class derived from Layer must implement the allocate() method to declare its own outputs and forward() to compute them. Also, before using the new layer in networks, you must register it using one of the REF: dnnLayerFactory “LayerFactory” macros.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface Layer : Algorithm

    Swift

    class Layer : Algorithm
  • Line segment detector class

    following the algorithm described at CITE: Rafael12 .

    Note

    The implementation has been removed due to a license conflict with the original code.

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface LineSegmentDetector : Algorithm

    Swift

    class LineSegmentDetector : Algorithm
  • Implements the Logistic Regression classifier.

    See

    REF: ml_intro_lr

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface LogisticRegression : StatModel

    Swift

    class LogisticRegression : StatModel
  • Maximally stable extremal region extractor

    The class encapsulates all the parameters of the MSER extraction algorithm (see the wiki article).

    • there are two different implementations of MSER: one for grey images, one for color images

    • the grey-image algorithm is taken from CITE: nister2008linear ; the paper claims it to be faster than the union-find method; it actually gets 1.5~2 m/s on a Centrino L7200 1.2 GHz laptop

    • the color-image algorithm is taken from CITE: forssen2007maximally ; it should be much slower than the grey-image method (3~4 times); the chi_table.h file is taken directly from the paper's source code, which is distributed under the GPL

    • (Python) A complete example showing the use of the MSER detector can be found at samples/python/mser.py

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface MSER : Feature2D

    Swift

    class MSER : Feature2D
  • Mat representation of an array of bytes

    See more

    Declaration

    Objective-C

    @interface MatOfByte : Mat

    Swift

    class MatOfByte : Mat
  • Mat representation of an array of DMatch objects

    See more

    Declaration

    Objective-C

    @interface MatOfDMatch : Mat

    Swift

    class MatOfDMatch : Mat
  • Mat representation of an array of doubles

    See more

    Declaration

    Objective-C

    @interface MatOfDouble : Mat

    Swift

    class MatOfDouble : Mat
  • Mat representation of an array of floats

    See more

    Declaration

    Objective-C

    @interface MatOfFloat : Mat

    Swift

    class MatOfFloat : Mat
  • Mat representation of an array of vectors of four floats

    See more

    Declaration

    Objective-C

    @interface MatOfFloat4 : Mat

    Swift

    class MatOfFloat4 : Mat
  • Mat representation of an array of vectors of six floats

    See more

    Declaration

    Objective-C

    @interface MatOfFloat6 : Mat

    Swift

    class MatOfFloat6 : Mat
  • Mat representation of an array of ints

    See more

    Declaration

    Objective-C

    @interface MatOfInt : Mat

    Swift

    class MatOfInt : Mat
  • Mat representation of an array of vectors of four ints

    See more

    Declaration

    Objective-C

    @interface MatOfInt4 : Mat

    Swift

    class MatOfInt4 : Mat
  • Mat representation of an array of KeyPoint objects

    See more

    Declaration

    Objective-C

    @interface MatOfKeyPoint : Mat

    Swift

    class MatOfKeyPoint : Mat
  • Mat representation of an array of Point2f objects

    See more

    Declaration

    Objective-C

    @interface MatOfPoint2f : Mat

    Swift

    class MatOfPoint2f : Mat
  • Mat representation of an array of Point objects

    See more

    Declaration

    Objective-C

    
    @interface MatOfPoint2i : Mat

    Swift

    class MatOfPoint : Mat
  • Mat representation of an array of Point3i objects

    See more

    Declaration

    Objective-C

    @interface MatOfPoint3 : Mat

    Swift

    class MatOfPoint3 : Mat
  • Mat representation of an array of Point3f objects

    See more

    Declaration

    Objective-C

    @interface MatOfPoint3f : Mat

    Swift

    class MatOfPoint3f : Mat
  • Mat representation of an array of Rect2d objects

    See more

    Declaration

    Objective-C

    @interface MatOfRect2d : Mat

    Swift

    class MatOfRect2d : Mat
  • Mat representation of an array of Rect objects

    See more

    Declaration

    Objective-C

    
    @interface MatOfRect2i : Mat

    Swift

    class MatOfRect : Mat
  • Mat representation of an array of RotatedRect objects

    See more

    Declaration

    Objective-C

    @interface MatOfRotatedRect : Mat

    Swift

    class MatOfRotatedRect : Mat
  • The resulting HDR image is calculated as a weighted average of the exposures, considering exposure values and camera response.

    For more information see CITE: DM97 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface MergeDebevec : MergeExposures

    Swift

    class MergeDebevec : MergeExposures
  • The base class for algorithms that can merge an exposure sequence into a single image.

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface MergeExposures : Algorithm

    Swift

    class MergeExposures : Algorithm
  • Pixels are weighted using contrast, saturation and well-exposedness measures, then images are combined using Laplacian pyramids.

    The resulting image weight is constructed as a weighted average of the contrast, saturation and well-exposedness measures.

    The resulting image doesn't require tonemapping and can be converted to an 8-bit image by multiplying by 255, but it's recommended to apply gamma correction and/or linear tonemapping.

    For more information see CITE: MK07 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface MergeMertens : MergeExposures

    Swift

    class MergeMertens : MergeExposures
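    Example

    A sketch of exposure fusion on a bracketed sequence; the Photo.createMergeMertens factory and the process(src:dst:) label are assumptions modeled on the C++ photo API:

    Swift

    import opencv2

    let merge = Photo.createMergeMertens()
    let exposures: [Mat] = []            // placeholder bracketed images
    let fusion = Mat()
    merge.process(src: exposures, dst: fusion)
    // As noted above, multiply by 255 (and optionally gamma-correct)
    // before converting the result to an 8-bit image.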
  • The resulting HDR image is calculated as a weighted average of the exposures, considering exposure values and camera response.

    For more information see CITE: RB99 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface MergeRobertson : MergeExposures

    Swift

    class MergeRobertson : MergeExposures
  • Result of an operation to determine the global minimum and maximum of an array

    See more

    Declaration

    Objective-C

    @interface MinMaxLocResult : NSObject

    Swift

    class MinMaxLocResult : NSObject
  • Ml

    Declaration

    Objective-C

    @interface Ml : NSObject

    Swift

    class Ml : NSObject
  • This class represents a high-level API for neural networks.

    Model allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets the preprocessing input and runs a forward pass.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface Model : Net

    Swift

    class Model : Net
  • Declaration

    Objective-C

    @interface Moments : NSObject
    
    @property double m00;
    @property double m10;
    @property double m01;
    @property double m20;
    @property double m11;
    @property double m02;
    @property double m30;
    @property double m21;
    @property double m12;
    @property double m03;
    
    @property double mu20;
    @property double mu11;
    @property double mu02;
    @property double mu30;
    @property double mu21;
    @property double mu12;
    @property double mu03;
    
    @property double nu20;
    @property double nu11;
    @property double nu02;
    @property double nu30;
    @property double nu21;
    @property double nu12;
    @property double nu03;
    
    #ifdef __cplusplus
    @property(readonly) cv::Moments& nativeRef;
    #endif
    
    -(instancetype)initWithM00:(double)m00 m10:(double)m10 m01:(double)m01 m20:(double)m20 m11:(double)m11 m02:(double)m02 m30:(double)m30 m21:(double)m21 m12:(double)m12 m03:(double)m03;
    
    -(instancetype)init;
    
    -(instancetype)initWithVals:(NSArray<NSNumber*>*)vals;
    
    #ifdef __cplusplus
    +(instancetype)fromNative:(cv::Moments&)moments;
    #endif
    
    -(void)set:(NSArray<NSNumber*>*)vals;
    -(void)completeState;
    -(NSString *)description;
    
    @end

    Swift

    class Moments : NSObject
  • Net

    This class allows creating and manipulating comprehensive artificial neural networks.

    A neural network is represented as a directed acyclic graph (DAG), where vertices are Layer instances and edges specify relationships between layer inputs and outputs.

    Each network layer has a unique integer id and a unique string name inside its network. LayerId can store either a layer name or a layer id.

    This class supports reference counting of its instances, i.e., copies point to the same instance.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface Net : NSObject

    Swift

    class Net : NSObject
  • Bayes classifier for normally distributed data.

    See

    REF: ml_intro_bayes

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface NormalBayesClassifier : StatModel

    Swift

    class NormalBayesClassifier : StatModel
  • ORB

    Class implementing the ORB (oriented BRIEF) keypoint detector and descriptor extractor described in CITE: RRKB11 .

    The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using the FAST or Harris response, finds their orientation using first-order moments and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation).

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface ORB : Feature2D

    Swift

    class ORB : Feature2D
  • Declaration

    Objective-C

    @interface Objdetect : NSObject

    Swift

    class Objdetect : NSObject
  • The structure represents the logarithmic grid range of statmodel parameters.

    It is used for optimizing statmodel accuracy by varying model parameters, the accuracy estimate being computed by cross-validation.

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface ParamGrid : NSObject

    Swift

    class ParamGrid : NSObject
  • The Params module

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface Params : NSObject

    Swift

    class Params : NSObject
  • Declaration

    Objective-C

    @interface Photo : NSObject

    Swift

    class Photo : NSObject
  • Represents a two-dimensional point whose coordinate values are of type double

    See more

    Declaration

    Objective-C

    @interface Point2d : NSObject

    Swift

    class Point2d : NSObject
  • Represents a two-dimensional point whose coordinate values are of type float

    See more

    Declaration

    Objective-C

    @interface Point2f : NSObject

    Swift

    class Point2f : NSObject
  • Represents a two-dimensional point whose coordinate values are of type int

    See more

    Declaration

    Objective-C

    
    @interface Point2i : NSObject

    Swift

    class Point : NSObject
  • Represents a three-dimensional point whose coordinate values are of type double

    See more

    Declaration

    Objective-C

    @interface Point3d : NSObject

    Swift

    class Point3d : NSObject
  • Represents a three-dimensional point whose coordinate values are of type float

    See more

    Declaration

    Objective-C

    @interface Point3f : NSObject

    Swift

    class Point3f : NSObject
  • Represents a three-dimensional point whose coordinate values are of type int

    See more

    Declaration

    Objective-C

    @interface Point3i : NSObject

    Swift

    class Point3i : NSObject
  • Class for detecting and decoding QR codes.

    Member of Objdetect

    See more

    Declaration

    Objective-C

    @interface QRCodeDetector : NSObject

    Swift

    class QRCodeDetector : NSObject
  • The class implements the random forest predictor.

    See

    REF: ml_intro_rtrees

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface RTrees : DTrees

    Swift

    class RTrees : DTrees
  • Represents a range of dimension indices

    See more

    Declaration

    Objective-C

    @interface Range : NSObject

    Swift

    class Range : NSObject
  • Represents a rectangle whose coordinate and dimension values are of type double

    See more

    Declaration

    Objective-C

    @interface Rect2d : NSObject

    Swift

    class Rect2d : NSObject
  • Represents a rectangle whose coordinate and dimension values are of type float

    See more

    Declaration

    Objective-C

    @interface Rect2f : NSObject

    Swift

    class Rect2f : NSObject
  • Represents a rectangle whose coordinate and dimension values are of type int

    See more

    Declaration

    Objective-C

    
    @interface Rect2i : NSObject

    Swift

    class Rect : NSObject
  • Represents a rotated rectangle on a plane

    See more

    Declaration

    Objective-C

    @interface RotatedRect : NSObject

    Swift

    class RotatedRect : NSObject
  • Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) algorithm by D. Lowe CITE: Lowe04 .

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface SIFT : Feature2D

    Swift

    class SIFT : Feature2D
  • SVM

    Support Vector Machines.

    See

    REF: ml_intro_svm

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface SVM : StatModel

    Swift

    class SVM : StatModel
  • **********************************************************************************\ Stochastic Gradient Descent SVM Classifier * ************************************************************************************

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface SVMSGD : StatModel

    Swift

    class SVMSGD : StatModel
  • Represents a four-element vector

    See more

    Declaration

    Objective-C

    @interface Scalar : NSObject

    Swift

    class Scalar : NSObject
  • This class represents a high-level API for segmentation models

    SegmentationModel allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets the preprocessing input, runs a forward pass and returns the class prediction for each pixel.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface SegmentationModel : Model

    Swift

    class SegmentationModel : Model
  • Class for extracting blobs from an image.

    The class implements a simple algorithm for extracting blobs from an image:

    1. Convert the source image to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold (exclusive) with distance thresholdStep between neighboring thresholds.
    2. Extract connected components from every binary image by findContours and calculate their centers.
    3. Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob, which is controlled by the minDistBetweenBlobs parameter.
    4. From the groups, estimate the final centers of blobs and their radii, and return them as locations and sizes of keypoints.

    This class performs several filtrations of returned blobs. You should set filterBy* to true/false to turn the corresponding filtration on or off. Available filtrations:

    • By color. This filter compares the intensity of a binary image at the center of a blob to blobColor. If they differ, the blob is filtered out. Use blobColor = 0 to extract dark blobs and blobColor = 255 to extract light blobs.
    • By area. Extracted blobs have an area between minArea (inclusive) and maxArea (exclusive).
    • By circularity. Extracted blobs have circularity ( \frac{4*\pi*Area}{perimeter * perimeter} ) between minCircularity (inclusive) and maxCircularity (exclusive).
    • By ratio of the minimum inertia to maximum inertia. Extracted blobs have this ratio between minInertiaRatio (inclusive) and maxInertiaRatio (exclusive).
    • By convexity. Extracted blobs have convexity (area / area of blob convex hull) between minConvexity (inclusive) and maxConvexity (exclusive).

    Default values of parameters are tuned to extract dark circular blobs.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface SimpleBlobDetector : Feature2D

    Swift

    class SimpleBlobDetector : Feature2D
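    Example

    A sketch of tuning the filters listed above through Params and detecting blobs; the property names and the create factory label are assumptions modeled on the C++ API:

    Swift

    import opencv2

    let params = Params()              // the Params class documented above
    params.filterByArea = true
    params.minArea = 50
    let detector = SimpleBlobDetector.create(parameters: params)
    let gray = Mat()                   // placeholder input image
    let keypoints = NSMutableArray()   // receives KeyPoint objects
    detector.detect(image: gray, keypoints: keypoints)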
  • Represents the dimensions of a rectangle whose values are of type double

    See more

    Declaration

    Objective-C

    @interface Size2d : NSObject

    Swift

    class Size2d : NSObject
  • Represents the dimensions of a rectangle whose values are of type float

    See more

    Declaration

    Objective-C

    @interface Size2f : NSObject

    Swift

    class Size2f : NSObject
  • Represents the dimensions of a rectangle whose values are of type int

    See more

    Declaration

    Objective-C

    
    @interface Size2i : NSObject

    Swift

    class Size : NSObject
  • Base interface for sparse optical flow algorithms.

    Member of Video

    See more

    Declaration

    Objective-C

    @interface SparseOpticalFlow : Algorithm

    Swift

    class SparseOpticalFlow : Algorithm
  • Class used for calculating a sparse optical flow.

    The class can calculate an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    See

    calcOpticalFlowPyrLK

    Member of Video

    See more

    Declaration

    Objective-C

    @interface SparsePyrLKOpticalFlow : SparseOpticalFlow

    Swift

    class SparsePyrLKOpticalFlow : SparseOpticalFlow
  • Base class for statistical models in OpenCV ML.

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface StatModel : Algorithm

    Swift

    class StatModel : Algorithm
  • Class for computing stereo correspondence using the block matching algorithm, introduced and contributed to OpenCV by K. Konolige.

    Member of Calib3d

    See more

    Declaration

    Objective-C

    @interface StereoBM : StereoMatcher

    Swift

    class StereoBM : StereoMatcher
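    Example

    A sketch of computing a disparity map from a rectified grayscale pair; the factory argument labels are assumptions modeled on the C++ API:

    Swift

    import opencv2

    let bm = StereoBM.create(numDisparities: 64, blockSize: 15)
    let leftGray = Mat()     // placeholder rectified left image
    let rightGray = Mat()    // placeholder rectified right image
    let disparity = Mat()
    bm.compute(left: leftGray, right: rightGray, disparity: disparity)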
  • The base class for stereo correspondence algorithms.

    Member of Calib3d

    See more

    Declaration

    Objective-C

    @interface StereoMatcher : Algorithm

    Swift

    class StereoMatcher : Algorithm
  • The class implements the modified H. Hirschmuller algorithm CITE: HH08 that differs from the original one as follows:

    • By default, the algorithm is single-pass, which means that it considers only 5 directions instead of 8. Set mode=StereoSGBM::MODE_HH in createStereoSGBM to run the full variant of the algorithm, but beware that it may consume a lot of memory.
    • The algorithm matches blocks, not individual pixels. However, setting blockSize=1 reduces the blocks to single pixels.
    • The mutual information cost function is not implemented. Instead, a simpler Birchfield-Tomasi sub-pixel metric from CITE: BT98 is used, though color images are supported as well.
    • Some pre- and post-processing steps from K. Konolige's StereoBM algorithm are included, for example: pre-filtering (StereoBM::PREFILTER_XSOBEL type) and post-filtering (uniqueness check, quadratic interpolation and speckle filtering).

    Note: (Python) An example illustrating the use of the StereoSGBM matching algorithm can be found at opencv_source_code/samples/python/stereo_match.py

    Member of Calib3d

    See more

    Declaration

    Objective-C

    @interface StereoSGBM : StereoMatcher

    Swift

    class StereoSGBM : StereoMatcher
  • The Subdiv2D module

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface Subdiv2D : NSObject

    Swift

    class Subdiv2D : NSObject
  • Class representing termination criteria for iterative algorithms.

    See more

    Declaration

    Objective-C

    @interface TermCriteria : NSObject

    Swift

    class TermCriteria : NSObject
  • A class to measure elapsed time.

    The class computes elapsed time by counting the number of ticks per second. That is, the following code computes the execution time in seconds: SNIPPET: snippets/core_various.cpp TickMeter_total

    It is also possible to compute the average time over multiple runs: SNIPPET: snippets/core_various.cpp TickMeter_average

    See

    getTickCount, getTickFrequency

    Member of Core

    See more

    Declaration

    Objective-C

    @interface TickMeter : NSObject

    Swift

    class TickMeter : NSObject
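    Example

    A minimal timing sketch; the start/stop/getTimeSec names follow the C++ TickMeter API, which these bindings are assumed to expose unchanged:

    Swift

    import opencv2

    let tm = TickMeter()
    tm.start()
    // ... code under measurement ...
    tm.stop()
    print("elapsed: \(tm.getTimeSec()) s")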
  • Base class for tonemapping algorithms - tools used to map an HDR image to the 8-bit range.

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface Tonemap : Algorithm

    Swift

    class Tonemap : Algorithm
  • Adaptive logarithmic mapping is a fast global tonemapping algorithm that scales the image in logarithmic domain.

    Since it's a global operator, the same function is applied to all pixels; it is controlled by the bias parameter.

    Optional saturation enhancement is possible as described in CITE: FL02 .

    For more information see CITE: DM03 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface TonemapDrago : Tonemap

    Swift

    class TonemapDrago : Tonemap
  • This algorithm transforms the image to contrast using gradients on all levels of a Gaussian pyramid, transforms the contrast values to HVS response and scales the response. After this, the image is reconstructed from the new contrast values.

    For more information see CITE: MM06 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface TonemapMantiuk : Tonemap

    Swift

    class TonemapMantiuk : Tonemap
  • This is a global tonemapping operator that models the human visual system.

    The mapping function is controlled by an adaptation parameter, which is computed using light adaptation and color adaptation.

    For more information see CITE: RD05 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface TonemapReinhard : Tonemap

    Swift

    class TonemapReinhard : Tonemap
  • Class encapsulating training data.

    Please note that the class only specifies the interface of training data, but not the implementation. All the statistical model classes in the ml module accept Ptr<TrainData> as a parameter. In other words, you can create your own class derived from TrainData and pass a smart pointer to an instance of this class into StatModel::train.

    See

    REF: ml_intro_data

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface TrainData : NSObject

    Swift

    class TrainData : NSObject
  • Variational optical flow refinement

    This class implements variational refinement of the input flow field, i.e., it uses the input flow to initialize the minimization of the following functional:

    E(U) = \int_{\Omega} \delta \Psi(E_I) + \gamma \Psi(E_G) + \alpha \Psi(E_S)

    where E_I, E_G, E_S are the color constancy, gradient constancy and smoothness terms respectively, and \Psi(s^2)=\sqrt{s^2+\epsilon^2} is a robust penalizer to limit the influence of outliers. A complete formulation and a description of the minimization procedure can be found in CITE: Brox2004 .

    Member of Video

    See more

    Declaration

    Objective-C

    @interface VariationalRefinement : DenseOpticalFlow

    Swift

    class VariationalRefinement : DenseOpticalFlow
  • Declaration

    Objective-C

    @interface Video : NSObject

    Swift

    class Video : NSObject
  • Class for video capturing from video files, image sequences or cameras.

    The class provides a C++ API for capturing video from cameras or for reading video files and image sequences.

    Here is how the class can be used: INCLUDE: samples/cpp/videocapture_basic.cpp

    Note

    In the REF: videoio_c “C API”, the black-box structure CvCapture is used instead of VideoCapture.

    Note

    • (C++) A basic sample on using the VideoCapture interface can be found at OPENCV_SOURCE_CODE/samples/cpp/videocapture_starter.cpp
    • (Python) A basic sample on using the VideoCapture interface can be found at OPENCV_SOURCE_CODE/samples/python/video.py
    • (Python) A multi-threaded video processing sample can be found at OPENCV_SOURCE_CODE/samples/python/video_threaded.py
    • (Python) A VideoCapture sample showcasing some features of the Video4Linux2 backend can be found at OPENCV_SOURCE_CODE/samples/python/video_v4l2.py

    Member of Videoio

    See more

    Declaration

    Objective-C

    @interface VideoCapture : NSObject

    Swift

    class VideoCapture : NSObject
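    Example

    A sketch of a capture loop; the index initializer and the read(image:) label are assumptions modeled on the C++/Java API:

    Swift

    import opencv2

    let cap = VideoCapture(index: 0)   // default camera
    let frame = Mat()
    while cap.isOpened() && cap.read(image: frame) {
        // process frame ...
        break   // one frame is enough for the sketch
    }
    cap.release()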
  • Video writer class.

    The class provides a C++ API for writing video files or image sequences.

    Member of Videoio

    See more

    Declaration

    Objective-C

    @interface VideoWriter : NSObject

    Swift

    class VideoWriter : NSObject
  • Declaration

    Objective-C

    @interface Videoio : NSObject

    Swift

    class Videoio : NSObject