Classes

The following classes are available globally.

  • Class implementing the AKAZE keypoint detector and descriptor extractor, described in CITE: ANB13.

    AKAZE descriptors can only be used with KAZE or AKAZE keypoints. This class is thread-safe.

    Note

    When you need descriptors, use Feature2D::detectAndCompute, which provides better performance. When using Feature2D::detect followed by Feature2D::compute, the scale space pyramid is computed twice.

    Note

    AKAZE implements T-API. When the image is passed as a UMat, some parts of the algorithm will use OpenCL.

    Note

    [ANB13] Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. Pablo F. Alcantarilla, Jesús Nuevo and Adrien Bartoli. In British Machine Vision Conference (BMVC), Bristol, UK, September 2013.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface AKAZE : Feature2D

    Swift

    class AKAZE : Feature2D
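
    The note above favors a single detectAndCompute call. A minimal Swift sketch of that call follows; the wrapper signatures used here (AKAZE.create(), Imgcodecs.imread(filename:), and detectAndCompute(image:mask:keypoints:descriptors:) with an NSMutableArray of keypoints) are assumptions based on how these bindings are typically generated, and "scene.jpg" is a placeholder path.

    // Sketch only: wrapper signatures are assumed, not quoted from this reference.
    // Assumes: import opencv2 (the OpenCV Swift framework).
    let image = Imgcodecs.imread(filename: "scene.jpg")   // placeholder input
    let detector = AKAZE.create()                         // default AKAZE parameters
    let keypoints = NSMutableArray()                      // receives KeyPoint objects
    let descriptors = Mat()
    // One call builds the scale space pyramid once, instead of twice as with
    // separate detect + compute calls.
    detector.detectAndCompute(image: image, mask: Mat(),
                              keypoints: keypoints, descriptors: descriptors)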
  • Artificial Neural Networks - Multi-Layer Perceptrons.

    Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zero. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once; that is, the weights can be adjusted based on the new training data.

    Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.

    See

    REF: ml_intro_ann

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface ANN_MLP : StatModel

    Swift

    class ANN_MLP : StatModel
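
    The create-then-train split described above looks roughly as follows in Swift. This is a minimal sketch assuming the wrappers mirror the C++ ml API (ANN_MLP.create(), setLayerSizes on a column Mat of layer sizes, StatModel's train); the Mat helpers and constants used here are assumptions as well.

    // Sketch only: signatures assumed to mirror the C++ ml API.
    let mlp = ANN_MLP.create()
    // Topology: 2 inputs, one hidden layer of 4 neurons, 1 output.
    let layers = Mat(rows: 3, cols: 1, type: CvType.CV_32S)
    try? layers.put(row: 0, col: 0, data: [2, 4, 1] as [Int32])
    mlp.setLayerSizes(layers)
    // All weights start at zero. Train on samples/responses Mats, and call
    // train again later to adjust the weights to new data:
    // mlp.train(samples: samples, layout: layout, responses: responses)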
  • Interface for Adaptive Manifold Filter realizations.

    For more details about this filter see CITE: Gastal12 and References_.

    Listed below are the optional parameters that may be set with the Algorithm::set function.

    • member double sigma_s = 16.0 Spatial standard deviation.
    • member double sigma_r = 0.2 Color space standard deviation.
    • member int tree_height = -1 Height of the manifold tree (default = -1 : automatically computed).
    • member int num_pca_iterations = 1 Number of iterations to compute the eigenvector.
    • member bool adjust_outliers = false Whether to adjust outliers using Eq. 9.
    • member bool use_RNG = true Whether to use a random number generator to compute the eigenvector.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface AdaptiveManifoldFilter : Algorithm

    Swift

    class AdaptiveManifoldFilter : Algorithm
  • Wrapping class for feature detection using the AGAST method.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface AgastFeatureDetector : Feature2D

    Swift

    class AgastFeatureDetector : Feature2D
  • This is a base class for all more or less complex algorithms in OpenCV

    especially for classes of algorithms, for which there can be multiple implementations. The examples are stereo correspondence (for which there are algorithms like block matching, semi-global block matching, graph-cut etc.), background subtraction (which can be done using mixture-of-gaussians models, codebook-based algorithm etc.), optical flow (block matching, Lucas-Kanade, Horn-Schunck etc.).

    Here is an example of using SimpleBlobDetector in your application via the Algorithm interface: SNIPPET: snippets/core_various.cpp Algorithm

    Member of Core

    See more

    Declaration

    Objective-C

    @interface Algorithm : NSObject

    Swift

    class Algorithm : NSObject
  • The base class for algorithms that align images of the same scene with different exposures

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface AlignExposures : Algorithm

    Swift

    class AlignExposures : Algorithm
  • This algorithm converts images to median threshold bitmaps (1 for pixels brighter than the median luminance and 0 otherwise) and then aligns the resulting bitmaps using bit operations.

    It is invariant to exposure, so exposure values and camera response are not necessary.

    In this implementation new image regions are filled with zeros.

    For more information see CITE: GW03 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface AlignMTB : AlignExposures

    Swift

    class AlignMTB : AlignExposures
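
    The median threshold bitmap itself is easy to illustrate in pure Swift, independent of the OpenCV wrappers. Because changing the exposure (approximately) preserves the ordering of pixel luminances, the bitmap stays stable across exposures, which is what makes the alignment exposure-invariant.

    // Toy median threshold bitmap: 1 where a pixel is brighter than the
    // median luminance, 0 otherwise.
    func medianThresholdBitmap(_ pixels: [[UInt8]]) -> [[UInt8]] {
        let flat = pixels.flatMap { $0 }.sorted()
        let median = flat[flat.count / 2]
        return pixels.map { row in row.map { $0 > median ? 1 : 0 } }
    }

    print(medianThresholdBitmap([[10, 200], [90, 160]]))   // [[0, 1], [0, 0]]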
  • Declaration

    Objective-C

    @interface Aruco : NSObject

    Swift

    class Aruco : NSObject
  • Computes average hash value of the input image

    This is a fast image hashing algorithm, but it only works in simple cases. For more details, please refer to CITE: lookslikeit

    Member of Img_hash

    See more

    Declaration

    Objective-C

    @interface AverageHash : ImgHashBase

    Swift

    class AverageHash : ImgHashBase
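
    The underlying idea can be sketched in pure Swift: given a tiny grayscale thumbnail (assumed to be precomputed by downscaling), record for each pixel whether it is brighter than the mean. Similar images then differ in only a few bits of the hash, i.e. they have a small Hamming distance.

    // Average hash over an already-downscaled 8x8 grayscale thumbnail.
    func averageHash(_ thumb: [UInt8]) -> UInt64 {
        precondition(thumb.count == 64, "expects an 8x8 thumbnail")
        let mean = thumb.reduce(0) { $0 + Int($1) } / thumb.count
        var hash: UInt64 = 0
        for (i, p) in thumb.enumerated() where Int(p) > mean {
            hash |= 1 << UInt64(i)   // one bit per pixel
        }
        return hash
    }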
  • Brute-force descriptor matcher.

    For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches of descriptor sets.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BFMatcher : DescriptorMatcher

    Swift

    class BFMatcher : DescriptorMatcher
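
    The brute-force strategy itself is compact; here is a pure-Swift sketch over binary descriptors with Hamming distance (a typical pairing for binary descriptors such as ORB or BRISK), assuming a non-empty train set:

    // For each query descriptor, scan every train descriptor and keep the closest.
    func hamming(_ a: [UInt8], _ b: [UInt8]) -> Int {
        zip(a, b).reduce(0) { $0 + ($1.0 ^ $1.1).nonzeroBitCount }
    }

    func bruteForceMatch(query: [[UInt8]],
                         train: [[UInt8]]) -> [(queryIdx: Int, trainIdx: Int, distance: Int)] {
        query.enumerated().map { (qi, q) in
            let best = train.indices.min { hamming(q, train[$0]) < hamming(q, train[$1]) }!
            return (qi, best, hamming(q, train[best]))
        }
    }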
  • BIF

    Implementation of bio-inspired features (BIF) from the paper: Guo, Guodong, et al. “Human age estimation using bio-inspired features.” Computer Vision and Pattern Recognition, 2009. CVPR 2009.

    Member of Face

    See more

    Declaration

    Objective-C

    @interface BIF : Algorithm

    Swift

    class BIF : Algorithm
  • Class to compute an image descriptor using the bag of visual words.

    Such a computation consists of the following steps:

    1. Compute descriptors for a given image and its keypoints set.
    2. Find the nearest visual words from the vocabulary for each keypoint descriptor.
    3. Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The i-th bin of the histogram is the frequency of the i-th word of the vocabulary in the given image.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BOWImgDescriptorExtractor : NSObject

    Swift

    class BOWImgDescriptorExtractor : NSObject
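
    Steps 2 and 3 can be sketched in pure Swift for float descriptors, assuming the vocabulary (one visual word per row) has already been trained and is non-empty:

    // Nearest visual word per descriptor, then a normalized word histogram.
    func sqDist(_ a: [Float], _ b: [Float]) -> Float {
        zip(a, b).reduce(0) { $0 + ($1.0 - $1.1) * ($1.0 - $1.1) }
    }

    func bowDescriptor(descriptors: [[Float]], vocabulary: [[Float]]) -> [Float] {
        var hist = [Float](repeating: 0, count: vocabulary.count)
        for d in descriptors {
            let nearest = vocabulary.indices.min {
                sqDist(d, vocabulary[$0]) < sqDist(d, vocabulary[$1])
            }!
            hist[nearest] += 1   // count the word occurrence
        }
        let total = Float(descriptors.count)
        return total > 0 ? hist.map { $0 / total } : hist   // normalize
    }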
  • kmeans-based class to train visual vocabulary using the bag of visual words approach.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BOWKMeansTrainer : BOWTrainer

    Swift

    class BOWKMeansTrainer : BOWTrainer
  • Abstract base class for training the bag of visual words vocabulary from a set of descriptors.

    For details, see, for example, Visual Categorization with Bags of Keypoints by Gabriella Csurka, Christopher R. Dance, Lixin Fan, Jutta Willamowski, Cedric Bray, 2004. :

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BOWTrainer : NSObject

    Swift

    class BOWTrainer : NSObject
  • Class implementing the BRISK keypoint detector and descriptor extractor, described in CITE: LCS11 .

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface BRISK : Feature2D

    Swift

    class BRISK : Feature2D
  • Base class for background/foreground segmentation.

    The class is only used to define the common interface for the whole family of background/foreground segmentation algorithms.

    Member of Video

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractor : Algorithm

    Swift

    class BackgroundSubtractor : Algorithm
  • Background subtraction based on counting.

    About as fast as MOG2 on a high-end system; more than twice as fast as MOG2 on cheap hardware (benchmarked on a Raspberry Pi 3).

    Algorithm by Sagi Zeevi ( https://github.com/sagi-z/BackgroundSubtractorCNT )

    Member of Bgsegm

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorCNT : BackgroundSubtractor

    Swift

    class BackgroundSubtractorCNT : BackgroundSubtractor
  • Background Subtractor module based on the algorithm given in CITE: Gold2012 .

    Takes a series of images and returns a sequence of mask (8UC1) images of the same size, where 255 indicates Foreground and 0 represents Background. This class implements an algorithm described in “Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation,” A. Godbehere, A. Matsukawa, K. Goldberg, American Control Conference, Montreal, June 2012.

    Member of Bgsegm

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorGMG : BackgroundSubtractor

    Swift

    class BackgroundSubtractorGMG : BackgroundSubtractor
  • Implementation of a different, improved algorithm called GSOC, named after the Google Summer of Code during which it was implemented; it did not originate from any paper.

    This algorithm demonstrates better performance on the CDNET 2014 dataset compared to other algorithms in OpenCV.

    Member of Bgsegm

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorGSOC : BackgroundSubtractor

    Swift

    class BackgroundSubtractorGSOC : BackgroundSubtractor
  • K-nearest neighbours - based Background/Foreground Segmentation Algorithm.

    The class implements the K-nearest neighbours background subtraction described in CITE: Zivkovic2006 . Very efficient if the number of foreground pixels is low.

    Member of Video

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorKNN : BackgroundSubtractor

    Swift

    class BackgroundSubtractorKNN : BackgroundSubtractor
  • Background Subtraction using Local SVD Binary Pattern. More details about the algorithm can be found at CITE: LGuo2016

    Member of Bgsegm

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorLSBP : BackgroundSubtractor

    Swift

    class BackgroundSubtractorLSBP : BackgroundSubtractor
  • This class computes the LSBP descriptors.

    Member of Bgsegm

    Declaration

    Objective-C

    @interface BackgroundSubtractorLSBPDesc : NSObject

    Swift

    class BackgroundSubtractorLSBPDesc : NSObject
  • Gaussian Mixture-based Background/Foreground Segmentation Algorithm.

    The class implements the algorithm described in CITE: KB2001 .

    Member of Bgsegm

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorMOG : BackgroundSubtractor

    Swift

    class BackgroundSubtractorMOG : BackgroundSubtractor
  • Gaussian Mixture-based Background/Foreground Segmentation Algorithm.

    The class implements the Gaussian mixture model background subtraction described in CITE: Zivkovic2004 and CITE: Zivkovic2006 .

    Member of Video

    See more

    Declaration

    Objective-C

    @interface BackgroundSubtractorMOG2 : BackgroundSubtractor

    Swift

    class BackgroundSubtractorMOG2 : BackgroundSubtractor
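
    MOG2 maintains a full Gaussian mixture per pixel; the per-frame workflow it shares with the other subtractors (feed a frame, get a foreground mask, update the model) can be illustrated with a deliberately simplified single-value running-average model in pure Swift. This is not the MOG2 algorithm, only the shape of its use.

    // Simplified per-pixel background model: foreground where the frame
    // deviates from a running average (MOG2 keeps a Gaussian mixture instead).
    struct RunningAverageSubtractor {
        var background: [Double]
        let learningRate: Double
        let threshold: Double

        mutating func apply(frame: [Double]) -> [UInt8] {
            var mask = [UInt8](repeating: 0, count: frame.count)
            for i in frame.indices {
                if abs(frame[i] - background[i]) > threshold { mask[i] = 255 }
                // Fold the new frame into the background model.
                background[i] = (1 - learningRate) * background[i] +
                                learningRate * frame[i]
            }
            return mask
        }
    }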
  • The BaseCascadeClassifier module

    Member of Objdetect

    Declaration

    Objective-C

    @interface BaseCascadeClassifier : Algorithm

    Swift

    class BaseCascadeClassifier : Algorithm
  • The BaseOCR module

    Member of Text

    Declaration

    Objective-C

    @interface BaseOCR : NSObject

    Swift

    class BaseOCR : NSObject
  • The BasicFaceRecognizer module

    Member of Face

    See more

    Declaration

    Objective-C

    @interface BasicFaceRecognizer : FaceRecognizer

    Swift

    class BasicFaceRecognizer : FaceRecognizer
  • Declaration

    Objective-C

    @interface Bgsegm : NSObject

    Swift

    class Bgsegm : NSObject
  • The Bioinspired module

    Member classes: TransientAreasSegmentationModule, Retina, RetinaFastToneMapping

    See more

    Declaration

    Objective-C

    @interface Bioinspired : NSObject

    Swift

    class Bioinspired : NSObject
  • Image hash based on block mean.

    See CITE: zauner2010implementation for details.

    Member of Img_hash

    See more

    Declaration

    Objective-C

    @interface BlockMeanHash : ImgHashBase

    Swift

    class BlockMeanHash : ImgHashBase
  • Board of markers

    A board is a set of markers in the 3D space with a common coordinate system. The common form of a board of markers is a planar (2D) board; however, any 3D layout can be used. A Board object is composed of:

    • The object points of the marker corners, i.e. their coordinates with respect to the board coordinate system.
    • The dictionary which indicates the type of markers of the board.
    • The identifiers of all the markers on the board.

    Member of Aruco

    See more

    Declaration

    Objective-C

    @interface Board : NSObject

    Swift

    class Board : NSObject
  • Boosted tree classifier derived from DTrees

    See

    REF: ml_intro_boost

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface Boost : DTrees

    Swift

    class Boost : DTrees
  • Class implementing BoostDesc (Learning Image Descriptors with Boosting), described in CITE: Trzcinski13a and CITE: Trzcinski13b.

    • desc: type of descriptor to use; BoostDesc::BINBOOST_256 is the default (256-bit long dimension). Available types are: BoostDesc::BGM, BoostDesc::BGM_HARD, BoostDesc::BGM_BILINEAR, BoostDesc::LBGM, BoostDesc::BINBOOST_64, BoostDesc::BINBOOST_128, BoostDesc::BINBOOST_256.
    • use_orientation: sample patterns using keypoint orientation; enabled by default.
    • scale_factor: adjusts the sampling window of detected keypoints. 6.25f is the default and fits the window ratio of KAZE and SURF keypoints; 6.75f should be the scale for SIFT keypoints; 5.00f for AKAZE, MSD, AGAST, FAST and BRISK keypoints; 0.75f for ORB keypoints; 1.50f was the default in the original implementation.

    Note

    BGM is the base descriptor, where each binary dimension is computed as the output of a single weak learner. BGM_HARD and BGM_BILINEAR refer to the same BGM but use different types of gradient binning: BGM_HARD uses the ASSIGN_HARD binning type, where the gradient is assigned to the nearest orientation bin, while BGM_BILINEAR uses the ASSIGN_BILINEAR binning type, where the gradient is assigned to the two neighbouring bins. In BGM and all other modes, which use the ASSIGN_SOFT binning type, the gradient is assigned to the 8 nearest bins according to the cosine value between the gradient angle and the bin center.

    LBGM (alias FP-Boost) is the floating-point extension, where each dimension is computed as a linear combination of the weak learner responses. BINBOOST and its subvariants are the binary extensions of LBGM, where each bit is computed as a thresholded linear combination of a set of weak learners.

    The BoostDesc header files (boostdesc_*.i) were exported from the original binaries with the export-boostdesc.py script from the samples subfolder.

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface BoostDesc : Feature2D

    Swift

    class BoostDesc : Feature2D
  • Class for computing BRIEF descriptors described in CITE: calon2010 .

    • bytes: length of the descriptor in bytes; valid values are 16, 32 (default), or 64.
    • use_orientation: sample patterns using keypoint orientation; disabled by default.

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface BriefDescriptorExtractor : Feature2D

    Swift

    class BriefDescriptorExtractor : Feature2D
  • Utility class to wrap a std::vector<char>

    See more

    Declaration

    Objective-C

    @interface ByteVector : NSObject

    Swift

    class ByteVector : NSObject
  • Base class for Contrast Limited Adaptive Histogram Equalization.

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface CLAHE : Algorithm

    Swift

    class CLAHE : Algorithm
  • Declaration

    Objective-C

    @interface Calib3d : NSObject

    Swift

    class Calib3d : NSObject
  • The base class for camera response calibration algorithms.

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface CalibrateCRF : Algorithm

    Swift

    class CalibrateCRF : Algorithm
  • The inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. The objective function is constructed using pixel values at the same position in all images; an extra term is added to make the result smoother.

    For more information see CITE: DM97 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface CalibrateDebevec : CalibrateCRF

    Swift

    class CalibrateDebevec : CalibrateCRF
  • The inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. This algorithm uses all image pixels.

    For more information see CITE: RB99 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface CalibrateRobertson : CalibrateCRF

    Swift

    class CalibrateRobertson : CalibrateCRF
  • Cascade classifier class for object detection.

    Member of Objdetect

    See more

    Declaration

    Objective-C

    @interface CascadeClassifier : NSObject

    Swift

    class CascadeClassifier : NSObject
  • Specific class for ChArUco boards. A ChArUco board is a planar board where the markers are placed inside the white squares of a chessboard. The benefit of ChArUco boards is that they provide both ArUco marker versatility and chessboard corner precision, which is important for calibration and pose estimation. This class also allows the easy creation and drawing of ChArUco boards.

    Member of Aruco

    See more

    Declaration

    Objective-C

    @interface CharucoBoard : Board

    Swift

    class CharucoBoard : Board
  • The CirclesGridFinderParameters module

    Member of Calib3d

    See more

    Declaration

    Objective-C

    @interface CirclesGridFinderParameters : NSObject

    Swift

    class CirclesGridFinderParameters : NSObject
  • This class represents high-level API for classification models.

    ClassificationModel allows you to set parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets preprocessing input, runs a forward pass, and returns the top-1 prediction.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface ClassificationModel : Model

    Swift

    class ClassificationModel : Model
  • Image hash based on color moments.

    See CITE: tang2012perceptual for details.

    Member of Img_hash

    See more

    Declaration

    Objective-C

    @interface ColorMomentHash : ImgHashBase

    Swift

    class ColorMomentHash : ImgHashBase
  • Class for ContourFitting algorithms. ContourFitting matches two contours z_a and z_b, minimizing the distance

    d(z_a, z_b) = \sum_n (a_n - s b_n e^{j(n \alpha + \phi)})^2

    where a_n and b_n are the Fourier descriptors of z_a and z_b, s is a scaling factor, \phi is the rotation angle, and \alpha is the starting point factor adjustment.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface ContourFitting : Algorithm

    Swift

    class ContourFitting : Algorithm
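
    For fixed s, \alpha and \phi the distance above is easy to evaluate; here is a pure-Swift sketch with Fourier descriptors stored as (re, im) pairs, reading the square as the squared magnitude of the complex residual:

    import Foundation

    // d(z_a, z_b) = sum_n |a_n - s * b_n * e^{j(n*alpha + phi)}|^2
    func fittingDistance(a: [(re: Double, im: Double)],
                         b: [(re: Double, im: Double)],
                         s: Double, alpha: Double, phi: Double) -> Double {
        zip(a, b).enumerated().reduce(0.0) { acc, item in
            let (n, pair) = item
            let (an, bn) = pair
            let theta = Double(n) * alpha + phi
            // s * b_n * e^{j theta}: complex scaling and rotation
            let re = s * (bn.re * cos(theta) - bn.im * sin(theta))
            let im = s * (bn.re * sin(theta) + bn.im * cos(theta))
            let dr = an.re - re
            let di = an.im - im
            return acc + dr * dr + di * di   // squared magnitude of the residual
        }
    }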
  • Declaration

    Objective-C

    @interface Converters : NSObject
    
    + (Mat*)vector_Point_to_Mat:(NSArray<Point2i*>*)pts NS_SWIFT_NAME(vector_Point_to_Mat(_:));
    
    + (NSArray<Point2i*>*)Mat_to_vector_Point:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point(_:));
    
    + (Mat*)vector_Point2f_to_Mat:(NSArray<Point2f*>*)pts NS_SWIFT_NAME(vector_Point2f_to_Mat(_:));
    
    + (NSArray<Point2f*>*)Mat_to_vector_Point2f:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point2f(_:));
    
    + (Mat*)vector_Point2d_to_Mat:(NSArray<Point2d*>*)pts NS_SWIFT_NAME(vector_Point2d_to_Mat(_:));
    
    + (NSArray<Point2f*>*)Mat_to_vector_Point2d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point2d(_:));
    
    + (Mat*)vector_Point3i_to_Mat:(NSArray<Point3i*>*)pts NS_SWIFT_NAME(vector_Point3i_to_Mat(_:));
    
    + (NSArray<Point3i*>*)Mat_to_vector_Point3i:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3i(_:));
    
    + (Mat*)vector_Point3f_to_Mat:(NSArray<Point3f*>*)pts NS_SWIFT_NAME(vector_Point3f_to_Mat(_:));
    
    + (NSArray<Point3f*>*)Mat_to_vector_Point3f:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3f(_:));
    
    + (Mat*)vector_Point3d_to_Mat:(NSArray<Point3d*>*)pts NS_SWIFT_NAME(vector_Point3d_to_Mat(_:));
    
    + (NSArray<Point3d*>*)Mat_to_vector_Point3d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3d(_:));
    
    + (Mat*)vector_float_to_Mat:(NSArray<NSNumber*>*)fs NS_SWIFT_NAME(vector_float_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_float:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_float(_:));
    
    + (Mat*)vector_uchar_to_Mat:(NSArray<NSNumber*>*)us NS_SWIFT_NAME(vector_uchar_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_uchar:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_uchar(_:));
    
    + (Mat*)vector_char_to_Mat:(NSArray<NSNumber*>*)cs NS_SWIFT_NAME(vector_char_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_char:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_char(_:));
    
    + (Mat*)vector_int_to_Mat:(NSArray<NSNumber*>*)is NS_SWIFT_NAME(vector_int_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_int:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_int(_:));
    
    + (Mat*)vector_Rect_to_Mat:(NSArray<Rect2i*>*)rs NS_SWIFT_NAME(vector_Rect_to_Mat(_:));
    
    + (NSArray<Rect2i*>*)Mat_to_vector_Rect:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Rect(_:));
    
    + (Mat*)vector_Rect2d_to_Mat:(NSArray<Rect2d*>*)rs NS_SWIFT_NAME(vector_Rect2d_to_Mat(_:));
    
    + (NSArray<Rect2d*>*)Mat_to_vector_Rect2d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Rect2d(_:));
    
    + (Mat*)vector_KeyPoint_to_Mat:(NSArray<KeyPoint*>*)kps NS_SWIFT_NAME(vector_KeyPoint_to_Mat(_:));
    
    + (NSArray<KeyPoint*>*)Mat_to_vector_KeyPoint:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_KeyPoint(_:));
    
    + (Mat*)vector_double_to_Mat:(NSArray<NSNumber*>*)ds NS_SWIFT_NAME(vector_double_to_Mat(_:));
    
    + (NSArray<NSNumber*>*)Mat_to_vector_double:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_double(_:));
    
    + (Mat*)vector_DMatch_to_Mat:(NSArray<DMatch*>*)matches NS_SWIFT_NAME(vector_DMatch_to_Mat(_:));
    
    + (NSArray<DMatch*>*)Mat_to_vector_DMatch:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_DMatch(_:));
    
    + (Mat*)vector_RotatedRect_to_Mat:(NSArray<RotatedRect*>*)rs NS_SWIFT_NAME(vector_RotatedRect_to_Mat(_:));
    
    + (NSArray<RotatedRect*>*)Mat_to_vector_RotatedRect:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_RotatedRect(_:));
    
    @end

    Swift

    class Converters : NSObject
  • Declaration

    Objective-C

    @interface Core : NSObject

    Swift

    class Core : NSObject
  • Utility functions for handling CvType values

    See more

    Declaration

    Objective-C

    @interface CvType : NSObject

    Swift

    class CvType : NSObject
  • Class implementing DAISY descriptor, described in CITE: Tola10

    • radius: radius of the descriptor at the initial scale.
    • q_radius: amount of radial range division quantity.
    • q_theta: amount of angular range division quantity.
    • q_hist: amount of gradient orientations range division quantity.
    • norm: descriptor normalization type. DAISY::NRM_NONE will not do any normalization (default); DAISY::NRM_PARTIAL means that histograms are normalized independently for an L2 norm equal to 1.0; DAISY::NRM_FULL means that descriptors are normalized for an L2 norm equal to 1.0; DAISY::NRM_SIFT means that descriptors are normalized for an L2 norm equal to 1.0 but no individual component is bigger than 0.154, as in SIFT.
    • H: optional 3x3 homography matrix used to warp the grid of daisy, while keypoint sampling remains unwarped on the image.
    • interpolation: switch to disable interpolation for a speed improvement at a minor quality loss.
    • use_orientation: sample patterns using keypoint orientation; disabled by default.

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface DAISY : Feature2D

    Swift

    class DAISY : Feature2D
  • DIS optical flow algorithm.

    This class implements the Dense Inverse Search (DIS) optical flow algorithm. More details about the algorithm can be found at CITE: Kroeger2016 . It includes three presets with preselected parameters that provide a reasonable trade-off between speed and quality. However, even the slowest preset is still relatively fast; use DeepFlow if you need better quality and don’t care about speed.

    This implementation includes several additional features compared to the algorithm described in the paper, including spatial propagation of flow vectors (REF: getUseSpatialPropagation), as well as an option to utilize an initial flow approximation passed to REF: calc (which is, essentially, temporal propagation, if the previous frame’s flow field is passed).

    Member of Video

    See more

    Declaration

    Objective-C

    @interface DISOpticalFlow : DenseOpticalFlow

    Swift

    class DISOpticalFlow : DenseOpticalFlow
  • Structure for matching: query descriptor index, train descriptor index, train image index and distance between descriptors.

    See more

    Declaration

    Objective-C

    @interface DMatch : NSObject

    Swift

    class DMatch : NSObject
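
    In client code, matches are typically sorted by distance, and knn matches are filtered with a ratio test; here is a pure-Swift sketch using a stand-in struct that mirrors these four fields (the 0.75 ratio is just a common choice):

    // Stand-in mirroring DMatch's fields.
    struct Match { let queryIdx, trainIdx, imgIdx: Int; let distance: Float }

    // Ratio test over (best, second-best) pairs from a knn match:
    // keep the best match only when it clearly beats the runner-up.
    func ratioTest(_ pairs: [(best: Match, second: Match)], ratio: Float = 0.75) -> [Match] {
        pairs.filter { $0.best.distance < ratio * $0.second.distance }
             .map { $0.best }
    }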
  • Interface for realizations of Domain Transform filter.

    For more details about this filter see CITE: Gastal11 .

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface DTFilter : Algorithm

    Swift

    class DTFilter : Algorithm
  • The class represents a single decision tree or a collection of decision trees.

    The current public interface of the class allows the user to train only a single decision tree; however, the class is capable of storing multiple decision trees and using them for prediction (by summing responses or using a voting scheme), and the classes derived from DTrees (such as RTrees and Boost) use this capability to implement decision tree ensembles.

    See

    REF: ml_intro_trees

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface DTrees : StatModel

    Swift

    class DTrees : StatModel
  • Base class for dense optical flow algorithms

    Member of Video

    See more

    Declaration

    Objective-C

    @interface DenseOpticalFlow : Algorithm

    Swift

    class DenseOpticalFlow : Algorithm
  • Abstract base class for matching keypoint descriptors.

    It has two groups of match methods: for matching descriptors of an image with another image or with an image set.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface DescriptorMatcher : Algorithm

    Swift

    class DescriptorMatcher : Algorithm
  • This class represents high-level API for object detection networks.

    DetectionModel allows you to set parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets preprocessing input, runs a forward pass, and returns the resulting detections. SSD, Faster R-CNN, and YOLO topologies are supported by DetectionModel.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface DetectionModel : Model

    Swift

    class DetectionModel : Model
  • Parameters for the detectMarker process:

    • adaptiveThreshWinSizeMin: minimum window size for adaptive thresholding before finding contours (default 3).
    • adaptiveThreshWinSizeMax: maximum window size for adaptive thresholding before finding contours (default 23).
    • adaptiveThreshWinSizeStep: increments from adaptiveThreshWinSizeMin to adaptiveThreshWinSizeMax during the thresholding (default 10).
    • adaptiveThreshConstant: constant for adaptive thresholding before finding contours (default 7)
    • minMarkerPerimeterRate: determines the minimum perimeter for a marker contour to be detected. This is defined as a rate relative to the maximum dimension of the input image (default 0.03).
    • maxMarkerPerimeterRate: determines the maximum perimeter for a marker contour to be detected. This is defined as a rate relative to the maximum dimension of the input image (default 4.0).
    • polygonalApproxAccuracyRate: minimum accuracy during the polygonal approximation process to determine which contours are squares. (default 0.03)
    • minCornerDistanceRate: minimum distance between corners for detected markers relative to its perimeter (default 0.05)
    • minDistanceToBorder: minimum distance of any corner to the image border for detected markers (in pixels) (default 3)
    • minMarkerDistanceRate: minimum mean distance between two marker corners to be considered similar, so that the smaller one is removed. The rate is relative to the smaller perimeter of the two markers (default 0.05).
    • cornerRefinementMethod: corner refinement method (CORNER_REFINE_NONE: no refinement; CORNER_REFINE_SUBPIX: do subpixel refinement; CORNER_REFINE_CONTOUR: use contour points; CORNER_REFINE_APRILTAG: use the AprilTag2 approach) (default CORNER_REFINE_NONE).
    • cornerRefinementWinSize: window size for the corner refinement process (in pixels) (default 5).
    • cornerRefinementMaxIterations: maximum number of iterations for stop criteria of the corner refinement process (default 30).
    • cornerRefinementMinAccuracy: minimum error for the stop criteria of the corner refinement process (default: 0.1)
    • markerBorderBits: number of bits of the marker border, i.e. marker border width (default 1).
    • perspectiveRemovePixelPerCell: number of bits (per dimension) for each cell of the marker when removing the perspective (default 4).
    • perspectiveRemoveIgnoredMarginPerCell: width of the margin of pixels on each cell not considered for the determination of the cell bit. Represents the rate relative to the total size of the cell, i.e. perspectiveRemovePixelPerCell (default 0.13).
    • maxErroneousBitsInBorderRate: maximum number of accepted erroneous bits in the border (i.e. number of allowed white bits in the border). Represented as a rate relative to the total number of bits per marker (default 0.35).
    • minOtsuStdDev: minimum standard deviation in pixel values during the decoding step to apply Otsu thresholding (otherwise, all the bits are set to 0 or 1 depending on whether the mean is higher than 128 or not) (default 5.0).
    • errorCorrectionRate: error correction rate relative to the maximum error correction capability for each dictionary (default 0.6).
    • aprilTagMinClusterPixels: reject quads containing too few pixels. (default 5)
    • aprilTagMaxNmaxima: how many corner candidates to consider when segmenting a group of pixels into a quad. (default 10)
    • aprilTagCriticalRad: Reject quads where pairs of edges have angles that are close to straight or close to 180 degrees. Zero means that no quads are rejected. (In radians) (default 10*PI/180)
    • aprilTagMaxLineFitMse: When fitting lines to the contours, what is the maximum mean squared error allowed? This is useful in rejecting contours that are far from being quad shaped; rejecting these quads “early” saves expensive decoding processing. (default 10.0)
    • aprilTagMinWhiteBlackDiff: When we build our model of black & white pixels, we add an extra check that the white model must be (overall) brighter than the black model. How much brighter? (in pixel values, [0,255]). (default 5)
    • aprilTagDeglitch: should the thresholded image be deglitched? Only useful for very noisy images. (default 0)
    • aprilTagQuadDecimate: Detection of quads can be done on a lower-resolution image, improving speed at a cost of pose accuracy and a slight decrease in detection rate. Decoding the binary payload is still done at full resolution. (default 0.0)
    • aprilTagQuadSigma: What Gaussian blur should be applied to the segmented image (used for quad detection?) Parameter is the standard deviation in pixels. Very noisy images benefit from non-zero values (e.g. 0.8). (default 0.0)
    • detectInvertedMarker: to check if there is a white marker. In order to generate a “white” marker just invert a normal marker by using a tilde, ~markerImage. (default false)

    Member of Aruco

    See more

    Declaration

    Objective-C

    @interface DetectorParameters : NSObject

    Swift

    class DetectorParameters : NSObject
  • This struct stores the scalar value (or array) of one of the following types: double, cv::String or int64. TODO: Maybe int64 is useless because the double type exactly stores at least 2^52 integers.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface DictValue : NSObject

    Swift

    class DictValue : NSObject
  • Dictionary/Set of markers. It contains the inner codification.

    bytesList contains the marker codewords where

    • bytesList.rows is the dictionary size
    • each marker is encoded using nbytes = ceil(markerSize*markerSize/8.)
    • each row contains all 4 rotations of the marker, so its length is 4*nbytes

    bytesList.ptr(i)[k*nbytes + j] is then the j-th byte of the i-th marker, in its k-th rotation.

    Member of Aruco

    See more

    Declaration

    Objective-C

    @interface Dictionary : NSObject

    Swift

    class Dictionary : NSObject
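
    The bytesList layout translates into straightforward indexing; here is a pure-Swift sketch with a [[UInt8]] stand-in for the bytesList Mat:

    // j-th byte of the i-th marker in its k-th rotation, with each row laid
    // out as [rotation 0 | rotation 1 | rotation 2 | rotation 3], nbytes each.
    func markerByte(bytesList: [[UInt8]], markerSize: Int,
                    i: Int, k: Int, j: Int) -> UInt8 {
        let nbytes = (markerSize * markerSize + 7) / 8   // ceil(markerSize^2 / 8)
        return bytesList[i][k * nbytes + j]
    }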
  • Main interface for all disparity map filters.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface DisparityFilter : Algorithm

    Swift

    class DisparityFilter : Algorithm
  • Disparity map filter based on the Weighted Least Squares filter (in the form of a Fast Global Smoother, which is much faster than traditional Weighted Least Squares filter implementations), with optional use of left-right-consistency-based confidence to refine the results in half-occlusions and uniform areas.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface DisparityWLSFilter : DisparityFilter

    Swift

    class DisparityWLSFilter : DisparityFilter
  • Dnn

    Declaration

    Objective-C

    @interface Dnn : NSObject

    Swift

    class Dnn : NSObject
  • Simple wrapper for a vector of two doubles

    See more

    Declaration

    Objective-C

    @interface Double2 : NSObject

    Swift

    class Double2 : NSObject
  • Simple wrapper for a vector of three doubles

    See more

    Declaration

    Objective-C

    @interface Double3 : NSObject

    Swift

    class Double3 : NSObject
  • Utility class to wrap a std::vector<double>

    See more

    Declaration

    Objective-C

    @interface DoubleVector : NSObject

    Swift

    class DoubleVector : NSObject
  • EM

    The class implements the Expectation Maximization algorithm.

    See

    REF: ml_intro_em

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface EM : StatModel

    Swift

    class EM : StatModel
  • Base class for the 1st and 2nd stages of the Neumann and Matas scene text detection algorithm CITE: Neumann12.

    Extracts the component tree (if needed) and filters the extremal regions (ERs) using a given classifier.

    Member of Text

    Declaration

    Objective-C

    @interface ERFilter : Algorithm

    Swift

    class ERFilter : Algorithm
  • The classifier callback is wrapped in a class. By doing this we hide implementations such as SVM or Boost; developers can provide their own classifiers to the ERFilter algorithm.

    Member of Text

    Declaration

    Objective-C

    @interface ERFilterCallback : NSObject

    Swift

    class ERFilterCallback : NSObject
  • Sparse match interpolation algorithm based on a modified locally-weighted affine estimator from CITE: Revaud2015 and the Fast Global Smoother as a post-processing filter.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface EdgeAwareInterpolator : SparseMatchInterpolator

    Swift

    class EdgeAwareInterpolator : SparseMatchInterpolator
  • Class implementing the EdgeBoxes algorithm from CITE: ZitnickECCV14edgeBoxes.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface EdgeBoxes : Algorithm

    Swift

    class EdgeBoxes : Algorithm
  • The EigenFaceRecognizer module

    Member of Face

    See more

    Declaration

    Objective-C

    @interface EigenFaceRecognizer : BasicFaceRecognizer

    Swift

    class EigenFaceRecognizer : BasicFaceRecognizer
  • Class implementing the FREAK (Fast Retina Keypoint) keypoint descriptor, described in CITE: AOV12 .

    The algorithm proposes a novel keypoint descriptor inspired by the human visual system and more precisely the retina, coined Fast Retina Keypoint (FREAK). A cascade of binary strings is computed by efficiently comparing image intensities over a retinal sampling pattern. FREAKs are in general faster to compute with a lower memory load and also more robust than SIFT, SURF or BRISK. They are competitive alternatives to existing keypoints, in particular for embedded applications.

    Note: an example of how to use the FREAK descriptor can be found at opencv_source_code/samples/cpp/freak_demo.cpp

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface FREAK : Feature2D

    Swift

    class FREAK : Feature2D
  • Declaration

    Objective-C

    @interface Face : NSObject

    Swift

    class Face : NSObject
  • Abstract base class for all face recognition models

    All face recognition models in OpenCV are derived from the abstract base class FaceRecognizer, which provides unified access to all face recognition algorithms in OpenCV.

    ### Description

    I’ll go a bit more into detail explaining FaceRecognizer, because it doesn’t look like a powerful interface at first sight. But: Every FaceRecognizer is an Algorithm, so you can easily get/set all model internals (if allowed by the implementation). Algorithm is a relatively new OpenCV concept, which is available since the 2.4 release. I suggest you take a look at its description.

    Algorithm provides the following features for all derived classes:

    • So called “virtual constructor”. That is, each Algorithm derivative is registered at program start and you can get the list of registered algorithms and create instance of a particular algorithm by its name (see Algorithm::create). If you plan to add your own algorithms, it is good practice to add a unique prefix to your algorithms to distinguish them from other algorithms.
    • Setting/Retrieving algorithm parameters by name. If you used video capturing functionality from the OpenCV highgui module, you are probably familiar with cv::cvSetCaptureProperty, cvGetCaptureProperty, VideoCapture::set and VideoCapture::get. Algorithm provides similar methods where, instead of integer id’s, you specify the parameter names as text Strings. See Algorithm::set and Algorithm::get for details.
    • Reading and writing parameters from/to XML or YAML files. Every Algorithm derivative can store all its parameters and then read them back. There is no need to re-implement it each time.

    Moreover, every FaceRecognizer supports:

    • Training of a FaceRecognizer with FaceRecognizer::train on a given set of images (your face database!).
    • Prediction of a given sample image, that means a face. The image is given as a Mat.
    • Loading/Saving the model state from/to a given XML or YAML.
    • Setting/Getting labels info, that is stored as a string. String labels info is useful for keeping names of the recognized people.

    Note

    When using the FaceRecognizer interface in combination with Python, please stick to Python 2. Some underlying scripts like create_csv will not work in other versions, like Python 3.

    ### Setting the Thresholds

    Sometimes you run into the situation when you want to apply a threshold on the prediction. A common scenario in face recognition is to tell whether a face belongs to the training dataset or if it is unknown. You might wonder why there’s no public API in FaceRecognizer to set the threshold for the prediction, but rest assured: it’s supported. It just means there’s no generic way in an abstract class to provide an interface for setting/getting the thresholds of every possible FaceRecognizer algorithm. The appropriate place to set the thresholds is in the constructor of the specific FaceRecognizer and, since every FaceRecognizer is an Algorithm (see above), you can get/set the thresholds at runtime!

    Here is an example of setting a threshold for the Eigenfaces method, when creating the model:

    // Let's say we want to keep 10 Eigenfaces and have a threshold value of 10.0
    int num_components = 10;
    double threshold = 10.0;
    // Then if you want to have a cv::FaceRecognizer with a confidence threshold,
    // create the concrete implementation with the appropriate parameters:
    Ptr<FaceRecognizer> model = EigenFaceRecognizer::create(num_components, threshold);

    Sometimes it’s impossible to train the model, just to experiment with threshold values. Thanks to Algorithm it’s possible to set internal model thresholds during runtime. Let’s see how we would set/get the prediction threshold for the Eigenface model we’ve created above:

    // The following line reads the threshold from the Eigenfaces model:
    double current_threshold = model->getDouble("threshold");
    // And this line sets the threshold to 0.0:
    model->set("threshold", 0.0);

    If you’ve set the threshold to 0.0 as we did above, then:

    Mat img = imread("person1/3.jpg", IMREAD_GRAYSCALE);
    // Get a prediction from the model. Note: We've set a threshold of 0.0 above;
    // since the distance is almost always larger than 0.0, you'll get -1 as
    // the label, which indicates this face is unknown.
    int predicted_label = model->predict(img);
    // ...

    is going to yield -1 as predicted label, which states this face is unknown.

    ### Getting the name of a FaceRecognizer

    Since every FaceRecognizer is an Algorithm, you can use Algorithm::name to get the name of a FaceRecognizer:

    // Create a FaceRecognizer:
    Ptr<FaceRecognizer> model = EigenFaceRecognizer::create();
    // And here's how to get its name:
    String name = model->name();

    Member of Face

    See more

    Declaration

    Objective-C

    @interface FaceRecognizer : Algorithm

    Swift

    class FaceRecognizer : Algorithm
  • Abstract base class for all facemark models

    To utilize this API in your program, please take a look at the REF: tutorial_table_of_content_facemark

    ### Description

    Facemark is a base class which provides universal access to any specific facemark algorithm. Therefore, the users should declare a desired algorithm before they can use it in their application.

    Here is an example of how to declare a facemark algorithm:

    // Using Facemark in your code:
    Ptr<Facemark> facemark = createFacemarkLBF();

    The typical pipeline for facemark detection is as follows:

    • Load the trained model using Facemark::loadModel.
    • Perform the fitting on an image via Facemark::fit.

    Member of Face

    See more

    Declaration

    Objective-C

    @interface Facemark : Algorithm

    Swift

    class Facemark : Algorithm
  • The FacemarkAAM module

    Member of Face

    Declaration

    Objective-C

    @interface FacemarkAAM : FacemarkTrain

    Swift

    class FacemarkAAM : FacemarkTrain
  • The FacemarkKazemi module

    Member of Face

    Declaration

    Objective-C

    @interface FacemarkKazemi : Facemark

    Swift

    class FacemarkKazemi : Facemark
  • The FacemarkLBF module

    Member of Face

    Declaration

    Objective-C

    @interface FacemarkLBF : FacemarkTrain

    Swift

    class FacemarkLBF : FacemarkTrain
  • Abstract base class for trainable facemark models

    To utilize this API in your program, please take a look at the REF: tutorial_table_of_content_facemark

    ### Description

    The AAM and LBF facemark models in OpenCV are derived from the abstract base class FacemarkTrain, which provides a unified access to those facemark algorithms in OpenCV.

    Here is an example of how to declare a facemark algorithm:

    // Using Facemark in your code:
    Ptr<Facemark> facemark = FacemarkLBF::create();

    The typical pipeline for facemark detection is listed as follows:

    • (Non-mandatory) Set a user-defined face detector using FacemarkTrain::setFaceDetector. The facemark algorithms are designed to fit the facial points onto a face. Therefore, the face information should be provided to the facemark algorithm. Some algorithms might provide a default face detector. However, users might prefer to use their own face detector to obtain the best possible detection results.
    • (Non-mandatory) Train the model for a specific algorithm using FacemarkTrain::training. In this case, the model should be automatically saved by the algorithm. If the user already has a trained model, then this part can be omitted.
    • Load the trained model using Facemark::loadModel.
    • Perform the fitting via the Facemark::fit.

    Member of Face

    Declaration

    Objective-C

    @interface FacemarkTrain : Facemark

    Swift

    class FacemarkTrain : Facemark
  • Class computing a dense optical flow using Gunnar Farneback’s algorithm.

    Member of Video

    See more

    Declaration

    Objective-C

    @interface FarnebackOpticalFlow : DenseOpticalFlow

    Swift

    class FarnebackOpticalFlow : DenseOpticalFlow
  • Interface for implementations of Fast Bilateral Solver.

    For more details about this solver see CITE: BarronPoole2016 .

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface FastBilateralSolverFilter : Algorithm

    Swift

    class FastBilateralSolverFilter : Algorithm
  • Wrapping class for feature detection using the FAST method.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface FastFeatureDetector : Feature2D

    Swift

    class FastFeatureDetector : Feature2D
  • Interface for implementations of Fast Global Smoother filter.

    For more details about this filter see CITE: Min2014 and CITE: Farbman2008 .

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface FastGlobalSmootherFilter : Algorithm

    Swift

    class FastGlobalSmootherFilter : Algorithm
  • Class implementing the FLD (Fast Line Detector) algorithm described in CITE: Lee14 .

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface FastLineDetector : Algorithm

    Swift

    class FastLineDetector : Algorithm
  • Abstract base class for 2D image feature detectors and descriptor extractors

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface Feature2D : Algorithm

    Swift

    class Feature2D : Algorithm
  • The FisherFaceRecognizer module

    Member of Face

    See more

    Declaration

    Objective-C

    @interface FisherFaceRecognizer : BasicFaceRecognizer

    Swift

    class FisherFaceRecognizer : BasicFaceRecognizer
  • Flann-based descriptor matcher.

    This matcher trains cv::flann::Index on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster than the brute-force matcher when matching a large train collection. FlannBasedMatcher does not support masking permissible matches of descriptor sets because flann::Index does not support this.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface FlannBasedMatcher : DescriptorMatcher

    Swift

    class FlannBasedMatcher : DescriptorMatcher
  • Simple wrapper for a vector of four floats

    See more

    Declaration

    Objective-C

    @interface Float4 : NSObject

    Swift

    class Float4 : NSObject
  • Simple wrapper for a vector of six floats

    See more

    Declaration

    Objective-C

    @interface Float6 : NSObject

    Swift

    class Float6 : NSObject
  • Utility class to wrap a std::vector<float>

    See more

    Declaration

    Objective-C

    @interface FloatVector : NSObject

    Swift

    class FloatVector : NSObject
  • Wrapping class for feature detection using the goodFeaturesToTrack function.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface GFTTDetector : Feature2D

    Swift

    class GFTTDetector : Feature2D
  • Finds an arbitrary template in a grayscale image using the Generalized Hough Transform.

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface GeneralizedHough : Algorithm

    Swift

    class GeneralizedHough : Algorithm
  • Finds an arbitrary template in a grayscale image using the Generalized Hough Transform.

    Detects position only without translation and rotation CITE: Ballard1981 .

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface GeneralizedHoughBallard : GeneralizedHough

    Swift

    class GeneralizedHoughBallard : GeneralizedHough
  • Finds an arbitrary template in a grayscale image using the Generalized Hough Transform.

    Detects position, translation and rotation CITE: Guil1999 .

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface GeneralizedHoughGuil : GeneralizedHough

    Swift

    class GeneralizedHoughGuil : GeneralizedHough
  • Graph Based Segmentation Algorithm. The class implements the algorithm described in CITE: PFF2004 .

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface GraphSegmentation : Algorithm

    Swift

    class GraphSegmentation : Algorithm
  • Class implementing the Gray-code pattern, based on CITE: UNDERWORLD.

    The generation of the pattern images is performed with Gray encoding using the traditional white and black colors.

    The information about the two image axes x, y is encoded separately into two different pattern sequences. A projector P with resolution (P_res_x, P_res_y) will result in Ncols = log_2(P_res_x) encoded pattern images representing the columns, and in Nrows = log_2(P_res_y) encoded pattern images representing the rows. For example, a projector with resolution 1024x768 will result in Ncols = 10 and Nrows = 10.

    However, the generated pattern sequence consists of both regular color and color-inverted images: inverted pattern images are images with the same structure as the original but with inverted colors. This provides an effective method for easily determining the intensity value of each pixel when it is lit (highest value) and when it is not lit (lowest value). So for a projector with resolution 1024x768, the number of pattern images will be Ncols * 2 + Nrows * 2 = 40.

    Member of Structured_light

    See more

    Declaration

    Objective-C

    @interface GrayCodePattern : StructuredLightPattern

    Swift

    class GrayCodePattern : StructuredLightPattern
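
    A quick pure-Swift check of the pattern counts for the 1024x768 example above:

    import Foundation

    // Ncols = log2(P_res_x), Nrows = log2(P_res_y), rounded up when the
    // resolution is not a power of two; regular + inverted images double each.
    func patternCount(resX: Int, resY: Int) -> (cols: Int, rows: Int, total: Int) {
        let nCols = Int(ceil(log2(Double(resX))))
        let nRows = Int(ceil(log2(Double(resY))))
        return (nCols, nRows, 2 * nCols + 2 * nRows)
    }

    print(patternCount(resX: 1024, resY: 768))   // (cols: 10, rows: 10, total: 40)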
  • Gray-world white balance algorithm

    This algorithm scales the values of pixels based on a gray-world assumption which states that the average of all channels should result in a gray image.

    It adds a modification which thresholds pixels based on their saturation value and only uses pixels below the provided threshold in finding average pixel values.

    For a 3-channel RGB image, saturation is calculated per pixel I as follows and lies in the range [0, 1]:

    \texttt{Saturation} [I] = \frac{\textrm{max}(R,G,B) - \textrm{min}(R,G,B) }{\textrm{max}(R,G,B)}

    A threshold of 1 means that all pixels are used to white-balance, while a threshold of 0 means no pixels are used. Lower thresholds are useful in white-balancing saturated images.

    Currently supports images of type REF: CV_8UC3 and REF: CV_16UC3.

    Member of Xphoto

    See more

    Declaration

    Objective-C

    @interface GrayworldWB : WhiteBalancer

    Swift

    class GrayworldWB : WhiteBalancer
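
    The saturation test can be written directly from the formula above in pure Swift; the 0.5 threshold below is just an example value:

    // Per-pixel saturation in [0, 1]; only pixels at or below the threshold
    // take part in the channel averages used to compute the gains.
    func saturation(r: Double, g: Double, b: Double) -> Double {
        let hi = max(r, g, b)
        let lo = min(r, g, b)
        return hi > 0 ? (hi - lo) / hi : 0
    }

    let usable = saturation(r: 200, g: 180, b: 170) <= 0.5   // example threshold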
  • Planar board with grid arrangement of markers. This is the most common type of board. All markers are placed in the same plane in a grid arrangement. The board can be drawn using the drawPlanarBoard() function (see: drawPlanarBoard).

    Member of Aruco

    See more

    Declaration

    Objective-C

    @interface GridBoard : Board

    Swift

    class GridBoard : Board
  • Interface for realizations of Guided Filter.

    For more details about this filter see CITE: Kaiming10 .

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface GuidedFilter : Algorithm

    Swift

    class GuidedFilter : Algorithm
  • Declaration

    Objective-C

    @interface HOGDescriptor : NSObject

    Swift

    class HOGDescriptor : NSObject
  • Class implementing the Harris-Laplace feature detector as described in CITE: Mikolajczyk2004.

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface HarrisLaplaceFeatureDetector : Feature2D

    Swift

    class HarrisLaplaceFeatureDetector : Feature2D
  • Class implementing two-dimensional phase unwrapping based on CITE: histogramUnwrapping. This algorithm belongs to the quality-guided phase unwrapping methods. First, it computes a reliability map from second differences between a pixel and its eight neighbours. Reliability values lie between 0 and 16*pi*pi. Then, this reliability map is used to compute the reliabilities of “edges”. An edge is an entity defined by two pixels that are connected horizontally or vertically. Its reliability is found by adding the reliabilities of the two pixels connected through it. Edges are sorted in a histogram based on their reliability values. This histogram is then used to unwrap pixels, starting from the highest quality pixel.

    The wrapped phase map and the unwrapped result are stored in CV_32FC1 Mat.

    Member of Phase_unwrapping

    See more

    Declaration

    Objective-C

    @interface HistogramPhaseUnwrapping : PhaseUnwrapping

    Swift

    class HistogramPhaseUnwrapping : PhaseUnwrapping
  • Parameters of phaseUnwrapping constructor.

    • width: phase map width.
    • height: phase map height.
    • histThresh: bins in the histogram are not of equal size. Default value is 3*pi*pi. The bins before the “histThresh” value are smaller.
    • nbrOfSmallBins: number of bins between 0 and “histThresh”. Default value is 10.
    • nbrOfLargeBins: number of bins between “histThresh” and 32*pi*pi (the highest edge reliability value). Default value is 5.

    Member of Phase_unwrapping

    See more

    Declaration

    Objective-C

    @interface HistogramPhaseUnwrappingParams : NSObject

    Swift

    class HistogramPhaseUnwrappingParams : NSObject
  • The base class for image hash algorithms

    Member of Img_hash

    See more

    Declaration

    Objective-C

    @interface ImgHashBase : Algorithm

    Swift

    class ImgHashBase : Algorithm
  • Declaration

    Objective-C

    @interface Img_hash : NSObject

    Swift

    class Img_hash : NSObject
  • The Imgcodecs module

    Member of Imgcodecs

    Member enums: ImreadModes, ImwriteFlags, ImwriteEXRTypeFlags, ImwritePNGFlags, ImwritePAMFlags

    See more

    Declaration

    Objective-C

    @interface Imgcodecs : NSObject

    Swift

    class Imgcodecs : NSObject
  • Simple wrapper for a vector of four ints

    See more

    Declaration

    Objective-C

    @interface Int4 : NSObject

    Swift

    class Int4 : NSObject
  • Utility class to wrap a std::vector<int>

    See more

    Declaration

    Objective-C

    @interface IntVector : NSObject

    Swift

    class IntVector : NSObject
  • Class implementing the KAZE keypoint detector and descriptor extractor, described in CITE: ABD12 .

    Note

    AKAZE descriptors can only be used with KAZE or AKAZE keypoints.

    [ABD12] KAZE Features. Pablo F. Alcantarilla, Adrien Bartoli and Andrew J. Davison. In European Conference on Computer Vision (ECCV), Florence, Italy, October 2012.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface KAZE : Feature2D

    Swift

    class KAZE : Feature2D
  • The class implements the K-Nearest Neighbors model

    See

    REF: ml_intro_knn

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface KNearest : StatModel

    Swift

    class KNearest : StatModel
  • Kalman filter class.

    The class implements a standard Kalman filter http://en.wikipedia.org/wiki/Kalman_filter, CITE: Welch95 . However, you can modify transitionMatrix, controlMatrix, and measurementMatrix to get extended Kalman filter functionality.

    Note

    In the C API, when the CvKalman* kalmanFilter structure is not needed anymore, it should be released with cvReleaseKalman(&kalmanFilter).

    Member of Video

    See more

    Declaration

    Objective-C

    @interface KalmanFilter : NSObject

    Swift

    class KalmanFilter : NSObject
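
    The predict/correct cycle is easiest to see in one dimension; here is a pure-Swift scalar sketch (the class itself works on Mat-valued state with full transition, control, and measurement matrices):

    // Scalar Kalman filter for x_k = x_{k-1} + w,  z_k = x_k + v.
    struct ScalarKalman {
        var x = 0.0       // state estimate
        var p = 1.0       // estimate variance
        let q: Double     // process noise variance
        let r: Double     // measurement noise variance

        mutating func predict() { p += q }   // identity transition model
        mutating func correct(_ z: Double) {
            let k = p / (p + r)              // Kalman gain
            x += k * (z - x)                 // pull estimate toward measurement
            p *= (1 - k)                     // shrink uncertainty
        }
    }

    var kf = ScalarKalman(q: 1e-4, r: 0.1)
    for z in [1.2, 0.9, 1.1] { kf.predict(); kf.correct(z) }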
  • Object representing a point feature found by one of many available keypoint detectors, such as Harris corner detector, FAST, StarDetector, SURF, SIFT etc.

    See more

    Declaration

    Objective-C

    @interface KeyPoint : NSObject

    Swift

    class KeyPoint : NSObject
  • This class represents a high-level API for keypoint models

    KeypointsModel allows you to set parameters for preprocessing the input image. It creates a network from a file with trained weights and config, sets the preprocessing input, runs a forward pass and returns the x and y coordinates of each detected keypoint.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface KeypointsModel : Model

    Swift

    class KeypointsModel : Model
  • Class for computing the LATCH descriptor. If you find this code useful, please add a reference to the following paper in your work: Gil Levi and Tal Hassner, “LATCH: Learned Arrangements of Three Patch Codes”, arXiv preprint arXiv:1501.03719, 15 Jan. 2015

    LATCH is a binary descriptor based on learned comparisons of triplets of image patches.

    • bytes The size of the descriptor; can be 64, 32, 16, 8, 4, 2 or 1.
    • rotationInvariance Whether or not the descriptor should compensate for orientation changes.
    • half_ssd_size The size of half of the mini-patches. For example, to compare triplets of patches of size 7x7, half_ssd_size should be (7-1)/2 = 3.
    • sigma Sigma value for GaussianBlur smoothing of the source image. The source image is used without smoothing when sigma is 0.

    Note: the descriptor can be coupled with any keypoint extractor. The only requirement is that if you set rotationInvariance = true, you must use an extractor which estimates the patch orientation (in degrees). Examples of such extractors are ORB and SIFT; a usage sketch follows this entry.

    Note: a complete example can be found under /samples/cpp/tutorial_code/xfeatures2D/latch_match.cpp

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface LATCH : Feature2D

    Swift

    class LATCH : Feature2D
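
    A hedged Swift sketch of the pairing described in the note above (LATCH descriptors computed on ORB keypoints). The create() defaults and the detect/compute signatures follow the Java binding; the Swift argument labels are assumptions and the file name is a placeholder.

     import opencv2
     import Foundation

     let image = Imgcodecs.imread(filename: "scene.png", flags: 0) // grayscale

     // ORB estimates patch orientation, as required when
     // rotationInvariance is enabled (the default).
     let detector = ORB.create()
     let keypoints = NSMutableArray()   // receives KeyPoint objects
     detector.detect(image: image, keypoints: keypoints)

     let latch = LATCH.create()         // defaults: 32 bytes, rotation invariant
     let descriptors = Mat()
     latch.compute(image: image, keypoints: keypoints, descriptors: descriptors)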
  • The LBPHFaceRecognizer module

    Member of Face

    See more

    Declaration

    Objective-C

    @interface LBPHFaceRecognizer : FaceRecognizer

    Swift

    class LBPHFaceRecognizer : FaceRecognizer
  • Class implementing the locally uniform comparison image descriptor, described in CITE: LUCID

    An image descriptor that can be computed very fast, while being about as robust as, for example, SURF or BRIEF.

    Note

    It requires a color image as input.

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface LUCID : Feature2D

    Swift

    class LUCID : Feature2D
  • This interface class allows you to build new Layers, the building blocks of networks.

    Each class derived from Layer must implement the allocate() method to declare its own outputs and forward() to compute them. Also, before using the new layer in networks, you must register it using one of the REF: dnnLayerFactory “LayerFactory” macros.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface Layer : Algorithm

    Swift

    class Layer : Algorithm
  • More sophisticated learning-based automatic white balance algorithm.

    Like REF: GrayworldWB, this algorithm works by applying different gains to the input image channels, but their computation is a bit more involved compared to the simple gray-world assumption. More details about the algorithm can be found in CITE: Cheng2015 .

    To mask out saturated pixels this function uses only pixels that satisfy the following condition:

    \frac{\textrm{max}(R,G,B)}{\texttt{range\_max\_val}} < \texttt{saturation\_thresh}

    Currently supports images of type REF: CV_8UC3 and REF: CV_16UC3.

    Member of Xphoto

    See more

    Declaration

    Objective-C

    @interface LearningBasedWB : WhiteBalancer

    Swift

    class LearningBasedWB : WhiteBalancer
  • Line segment detector class

    following the algorithm described at CITE: Rafael12 .

    Note

    The implementation has been removed due to a license conflict with the original code.

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface LineSegmentDetector : Algorithm

    Swift

    class LineSegmentDetector : Algorithm
  • Implements Logistic Regression classifier.

    See

    REF: ml_intro_lr

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface LogisticRegression : StatModel

    Swift

    class LogisticRegression : StatModel
  • Minimum Average Correlation Energy Filter, useful for authentication with (cancellable) biometric features. It needs only a few positive samples to train (10-50) and no negatives at all; it is also robust to noise/salting.

     see also: CITE: Savvides04
    
     this implementation is largely based on: https://code.google.com/archive/p/pam-face-authentication (GSOC 2009)
    
     use it like:
    
    
     Ptr<face::MACE> mace = face::MACE::create(64);
    
     vector<Mat> pos_images = ...
     mace->train(pos_images);
    
     Mat query = ...
     bool same = mace->same(query);
    
    
    
     you can also use two-factor authentication, with an additional passphrase:
    
    
     String owners_passphrase = "ilikehotdogs";
     Ptr<face::MACE> mace = face::MACE::create(64);
     mace->salt(owners_passphrase);
     vector<Mat> pos_images = ...
     mace->train(pos_images);
    
     // now, users have to give a valid passphrase, along with the image:
     Mat query = ...
     cout << "enter passphrase: ";
     string pass;
     getline(cin, pass);
     mace->salt(pass);
     bool same = mace->same(query);
    
    
     save/load your model:
    
     Ptr<face::MACE> mace = face::MACE::create(64);
     mace->train(pos_images);
     mace->save("my_mace.xml");
    
     // later:
     Ptr<MACE> reloaded = MACE::load("my_mace.xml");
     reloaded->same(some_image);
    

    Member of Face

    See more

    Declaration

    Objective-C

    @interface MACE : Algorithm

    Swift

    class MACE : Algorithm
  • Class implementing the MSD (Maximal Self-Dissimilarity) keypoint detector, described in CITE: Tombari14.

    The algorithm implements a novel interest point detector stemming from the intuition that image patches which are highly dissimilar over a relatively large extent of their surroundings hold the property of being repeatable and distinctive. This concept of “contextual self-dissimilarity” reverses the key paradigm of recent successful techniques such as the Local Self-Similarity descriptor and the Non-Local Means filter, which build upon the presence of similar - rather than dissimilar - patches. Moreover, it extends to contextual information the local self-dissimilarity notion embedded in established detectors of corner-like interest points, thereby achieving enhanced repeatability, distinctiveness and localization accuracy.

    Member of Xfeatures2d

    Declaration

    Objective-C

    @interface MSDDetector : Feature2D

    Swift

    class MSDDetector : Feature2D
  • Maximally stable extremal region extractor

    The class encapsulates all the parameters of the MSER extraction algorithm (see the wiki article).

    • there are two different implementations of MSER: one for grey images, one for color images

    • the grey image algorithm is taken from: CITE: nister2008linear ; the paper claims to be faster than the union-find method; it actually gets 1.5~2M/s on my Centrino L7200 1.2GHz laptop.

    • the color image algorithm is taken from: CITE: forssen2007maximally ; it should be much slower than the grey image method (3~4 times); the chi_table.h file is taken directly from the paper’s source code, which is distributed under GPL.

    • (Python) A complete example showing the use of the MSER detector can be found at samples/python/mser.py

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface MSER : Feature2D

    Swift

    class MSER : Feature2D
  • Marr-Hildreth Operator Based Hash, slowest but more discriminative.

    See CITE: zauner2010implementation for details.

    Member of Img_hash

    See more

    Declaration

    Objective-C

    @interface MarrHildrethHash : ImgHashBase

    Swift

    class MarrHildrethHash : ImgHashBase
  • Mat

    The class Mat represents an n-dimensional dense numerical single-channel or multi-channel array.

    See more

    Declaration

    Objective-C

    @interface Mat : NSObject

    Swift

    class Mat : NSObject
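
    A small Swift sketch of creating and filling a Mat. This is not from the documentation: it assumes the Swift binding exposes the Java-style constructor, the put accessor and dump() (argument labels assumed).

     import opencv2

     // 3x3 single-channel float matrix, zero-initialized.
     let m = Mat(rows: 3, cols: 3, type: CvType.CV_32FC1, scalar: Scalar(0.0))
     for i: Int32 in 0..<3 {
         try? m.put(row: i, col: i, data: [Float(1.0)])  // set the diagonal
     }
     print(m.dump())   // human-readable contents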
  • Mat representation of an array of bytes

    See more

    Declaration

    Objective-C

    @interface MatOfByte : Mat

    Swift

    class MatOfByte : Mat
  • Mat representation of an array of DMatch objects

    See more

    Declaration

    Objective-C

    @interface MatOfDMatch : Mat

    Swift

    class MatOfDMatch : Mat
  • Mat representation of an array of doubles

    See more

    Declaration

    Objective-C

    @interface MatOfDouble : Mat

    Swift

    class MatOfDouble : Mat
  • Mat representation of an array of floats

    See more

    Declaration

    Objective-C

    @interface MatOfFloat : Mat

    Swift

    class MatOfFloat : Mat
  • Mat representation of an array of vectors of four floats

    See more

    Declaration

    Objective-C

    @interface MatOfFloat4 : Mat

    Swift

    class MatOfFloat4 : Mat
  • Mat representation of an array of vectors of six floats

    See more

    Declaration

    Objective-C

    @interface MatOfFloat6 : Mat

    Swift

    class MatOfFloat6 : Mat
  • Mat representation of an array of ints

    See more

    Declaration

    Objective-C

    @interface MatOfInt : Mat

    Swift

    class MatOfInt : Mat
  • Mat representation of an array of vectors of four ints

    See more

    Declaration

    Objective-C

    @interface MatOfInt4 : Mat

    Swift

    class MatOfInt4 : Mat
  • Mat representation of an array of KeyPoint objects

    See more

    Declaration

    Objective-C

    @interface MatOfKeyPoint : Mat

    Swift

    class MatOfKeyPoint : Mat
  • Mat representation of an array of Point2f objects

    See more

    Declaration

    Objective-C

    @interface MatOfPoint2f : Mat

    Swift

    class MatOfPoint2f : Mat
  • Mat representation of an array of Point objects

    See more

    Declaration

    Objective-C

    
    @interface MatOfPoint2i : Mat

    Swift

    class MatOfPoint : Mat
  • Mat representation of an array of Point3i objects

    See more

    Declaration

    Objective-C

    @interface MatOfPoint3 : Mat

    Swift

    class MatOfPoint3 : Mat
  • Mat representation of an array of Point3f objects

    See more

    Declaration

    Objective-C

    @interface MatOfPoint3f : Mat

    Swift

    class MatOfPoint3f : Mat
  • Mat representation of an array of Rect2d objects

    See more

    Declaration

    Objective-C

    @interface MatOfRect2d : Mat

    Swift

    class MatOfRect2d : Mat
  • Mat representation of an array of Rect objects

    See more

    Declaration

    Objective-C

    
    @interface MatOfRect2i : Mat

    Swift

    class MatOfRect : Mat
  • Mat representation of an array of RotatedRect objects

    See more

    Declaration

    Objective-C

    @interface MatOfRotatedRect : Mat

    Swift

    class MatOfRotatedRect : Mat
  • The resulting HDR image is calculated as a weighted average of the exposures, considering exposure values and camera response.

    For more information see CITE: DM97 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface MergeDebevec : MergeExposures

    Swift

    class MergeDebevec : MergeExposures
  • The base class for algorithms that can merge an exposure sequence into a single image.

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface MergeExposures : Algorithm

    Swift

    class MergeExposures : Algorithm
  • Pixels are weighted using contrast, saturation and well-exposedness measures, then the images are combined using Laplacian pyramids.

    The resulting image weight is constructed as a weighted average of the contrast, saturation and well-exposedness measures.

    The resulting image doesn’t require tonemapping and can be converted to an 8-bit image by multiplying by 255, but it’s recommended to apply gamma correction and/or linear tonemapping; a usage sketch follows this entry.

    For more information see CITE: MK07 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface MergeMertens : MergeExposures

    Swift

    class MergeMertens : MergeExposures
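
    A hedged Swift sketch of the exposure fusion described above, including the recommended scaling of the result to 8 bits. The create/process names follow the Java binding; the Swift labels and the convert(to:rtype:alpha:) spelling of convertTo are assumptions, and the file names are placeholders.

     import opencv2

     let exposures: [Mat] = [
         Imgcodecs.imread(filename: "under.jpg"),
         Imgcodecs.imread(filename: "normal.jpg"),
         Imgcodecs.imread(filename: "over.jpg")
     ]
     let mertens = MergeMertens.create()
     let fusion = Mat()                     // floating-point result in [0, 1]
     mertens.process(src: exposures, dst: fusion)

     // As noted above: multiply by 255 for an 8-bit image (gamma
     // correction and/or linear tonemapping is still recommended).
     let display = Mat()
     fusion.convert(to: display, rtype: CvType.CV_8UC3, alpha: 255.0)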
  • The resulting HDR image is calculated as a weighted average of the exposures, considering exposure values and camera response.

    For more information see CITE: RB99 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface MergeRobertson : MergeExposures

    Swift

    class MergeRobertson : MergeExposures
  • Result of operation to determine global minimum and maximum of an array

    See more

    Declaration

    Objective-C

    @interface MinMaxLocResult : NSObject

    Swift

    class MinMaxLocResult : NSObject
  • Ml
  • This class represents a high-level API for neural networks.

    Model allows you to set parameters for preprocessing the input image. It creates a network from a file with trained weights and config, sets the preprocessing input and runs a forward pass.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface Model : Net

    Swift

    class Model : Net
  • Declaration

    Objective-C

    @interface Moments : NSObject
    
    @property double m00;
    @property double m10;
    @property double m01;
    @property double m20;
    @property double m11;
    @property double m02;
    @property double m30;
    @property double m21;
    @property double m12;
    @property double m03;
    
    @property double mu20;
    @property double mu11;
    @property double mu02;
    @property double mu30;
    @property double mu21;
    @property double mu12;
    @property double mu03;
    
    @property double nu20;
    @property double nu11;
    @property double nu02;
    @property double nu30;
    @property double nu21;
    @property double nu12;
    @property double nu03;
    
    #ifdef __cplusplus
    @property(readonly) cv::Moments& nativeRef;
    #endif
    
    -(instancetype)initWithM00:(double)m00 m10:(double)m10 m01:(double)m01 m20:(double)m20 m11:(double)m11 m02:(double)m02 m30:(double)m30 m21:(double)m21 m12:(double)m12 m03:(double)m03;
    
    -(instancetype)init;
    
    -(instancetype)initWithVals:(NSArray<NSNumber*>*)vals;
    
    #ifdef __cplusplus
    +(instancetype)fromNative:(cv::Moments&)moments;
    #endif
    
    -(void)set:(NSArray<NSNumber*>*)vals;
    -(void)completeState;
    -(NSString *)description;
    
    @end

    Swift

    class Moments : NSObject
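
    The spatial moments above determine the centroid of a shape as cx = m10/m00, cy = m01/m00. A hedged Swift sketch, assuming Imgproc.moments(array:) carries over from the Java binding (file name is a placeholder):

     import opencv2

     let binary = Imgcodecs.imread(filename: "shape.png", flags: 0) // grayscale
     let m = Imgproc.moments(array: binary)
     if m.m00 != 0 {
         // Centroid from the zeroth and first-order spatial moments.
         print("centroid: (\(m.m10 / m.m00), \(m.m01 / m.m00))")
     }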
  • This class is used to track multiple objects using the specified tracker algorithm.

    The MultiTracker is a naive implementation of multiple object tracking. It processes the tracked objects independently, without any optimization across them.

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface MultiTracker : Algorithm

    Swift

    class MultiTracker : Algorithm
  • Net

    This class allows you to create and manipulate comprehensive artificial neural networks.

    A neural network is represented as a directed acyclic graph (DAG), whose vertices are Layer instances and whose edges specify the relationships between layer inputs and outputs.

    Each network layer has a unique integer id and a unique string name inside its network. LayerId can store either a layer name or a layer id.

    This class supports reference counting of its instances, i.e. copies point to the same instance. A minimal forward-pass sketch follows this entry.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface Net : NSObject

    Swift

    class Net : NSObject
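
    The forward-pass sketch referenced above, in Swift. It assumes Dnn.readNet, Dnn.blobFromImage, setInput and forward carry over from the Java dnn binding (Swift labels assumed); the model and image names are placeholders.

     import opencv2

     let net = Dnn.readNet(model: "model.onnx")

     let image = Imgcodecs.imread(filename: "input.jpg")
     let blob = Dnn.blobFromImage(image: image)   // 4-D NCHW input blob
     net.setInput(blob: blob)
     let out = net.forward()    // runs the whole DAG, returns the output blob
     print(out.size())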
  • Bayes classifier for normally distributed data.

    See

    REF: ml_intro_bayes

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface NormalBayesClassifier : StatModel

    Swift

    class NormalBayesClassifier : StatModel
  • OCRBeamSearchDecoder class provides an interface for OCR using the Beam Search algorithm.

    @note - (C++) An example on using OCRBeamSearchDecoder recognition combined with scene text detection can be found at the demo sample: https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/word_recognition.cpp

    Member of Text

    See more

    Declaration

    Objective-C

    @interface OCRBeamSearchDecoder : BaseOCR

    Swift

    class OCRBeamSearchDecoder : BaseOCR
  • The callback with the character classifier is made into a class.

     This way it hides the feature extractor and the classifier itself, so developers can write
     their own OCR code.
    
     The default character classifier and feature extractor can be loaded using the utility function
     loadOCRBeamSearchClassifierCNN with all its parameters provided in
     <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRBeamSearch_CNN_model_data.xml.gz>.
    

    Member of Text

    Declaration

    Objective-C

    @interface OCRBeamSearchDecoderClassifierCallback : NSObject

    Swift

    class OCRBeamSearchDecoderClassifierCallback : NSObject
  • OCRHMMDecoder class provides an interface for OCR using Hidden Markov Models.

    @note - (C++) An example on using OCRHMMDecoder recognition combined with scene text detection can be found at the webcam_demo sample: https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp

    Member of Text

    See more

    Declaration

    Objective-C

    @interface OCRHMMDecoder : BaseOCR

    Swift

    class OCRHMMDecoder : BaseOCR
  • The callback with the character classifier is made into a class.

     This way it hides the feature extractor and the classifier itself, so developers can write
     their own OCR code.
    
     The default character classifier and feature extractor can be loaded using the utility function
     loadOCRHMMClassifierNM and KNN model provided in
     <https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/OCRHMM_knn_model_data.xml.gz>.
    

    Member of Text

    Declaration

    Objective-C

    @interface OCRHMMDecoderClassifierCallback : NSObject

    Swift

    class OCRHMMDecoderClassifierCallback : NSObject
  • OCRTesseract class provides an interface with the tesseract-ocr API (v3.02.02) in C++.

    Notice that it is compiled only when tesseract-ocr is correctly installed.

    @note - (C++) An example of OCRTesseract recognition combined with scene text detection can be found at the end_to_end_recognition demo: https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/end_to_end_recognition.cpp - (C++) Another example of OCRTesseract recognition combined with scene text detection can be found at the webcam_demo: https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp

    Member of Text

    See more

    Declaration

    Objective-C

    @interface OCRTesseract : BaseOCR

    Swift

    class OCRTesseract : BaseOCR
  • ORB

    Class implementing the ORB (oriented BRIEF) keypoint detector and descriptor extractor

    described in CITE: RRKB11 . The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using FAST or Harris response, finds their orientation using first-order moments and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation).

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface ORB : Feature2D

    Swift

    class ORB : Feature2D
  • Declaration

    Objective-C

    @interface Objdetect : NSObject

    Swift

    class Objdetect : NSObject
  • Class implementing PCT (position-color-texture) signature extraction as described in CITE: KrulisLS16. The algorithm is divided into a feature sampler and a clusterizer. The feature sampler produces samples at a given set of coordinates; the clusterizer then clusters these samples using the k-means algorithm. The resulting set of clusters is the signature of the input image.

    A signature is an array of SIGNATURE_DIMENSION-dimensional points. Used dimensions are: weight, x, y position; lab color, contrast, entropy. CITE: KrulisLS16 CITE: BeecksUS10

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface PCTSignatures : Algorithm

    Swift

    class PCTSignatures : Algorithm
  • Class implementing Signature Quadratic Form Distance (SQFD).

    See

    Christian Beecks, Merih Seran Uysal, Thomas Seidl. Signature quadratic form distance. In Proceedings of the ACM International Conference on Image and Video Retrieval, pages 438-445. ACM, 2010. CITE: BeecksUS10

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface PCTSignaturesSQFD : Algorithm

    Swift

    class PCTSignaturesSQFD : Algorithm
  • pHash

    Slower than average_hash, but tolerant of minor modifications

    This algorithm tolerates more variation than averageHash; for more details please refer to CITE: lookslikeit

    Member of Img_hash

    See more

    Declaration

    Objective-C

    @interface PHash : ImgHashBase

    Swift

    class PHash : ImgHashBase
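
    A hedged Swift sketch of hashing and comparing two images. The compute/compare signatures follow the Java ImgHashBase API; the Swift labels are assumptions and the file names are placeholders.

     import opencv2

     let hasher = PHash.create()

     let hashA = Mat(), hashB = Mat()
     hasher.compute(inputArr: Imgcodecs.imread(filename: "a.jpg"), outputArr: hashA)
     hasher.compute(inputArr: Imgcodecs.imread(filename: "b.jpg"), outputArr: hashB)

     // Smaller values mean more similar images (Hamming distance for pHash).
     print("distance: \(hasher.compare(hashOne: hashA, hashTwo: hashB))")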
  • The structure represents the logarithmic grid range of statmodel parameters.

    It is used for optimizing statmodel accuracy by varying model parameters, the accuracy estimate being computed by cross-validation.

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface ParamGrid : NSObject

    Swift

    class ParamGrid : NSObject
  • Abstract base class for phase unwrapping.

    Member of Phase_unwrapping

    See more

    Declaration

    Objective-C

    @interface PhaseUnwrapping : Algorithm

    Swift

    class PhaseUnwrapping : Algorithm
  • The Phase_unwrapping module

    Member classes: HistogramPhaseUnwrapping, HistogramPhaseUnwrappingParams, PhaseUnwrapping

    Declaration

    Objective-C

    @interface Phase_unwrapping : NSObject

    Swift

    class Phase_unwrapping : NSObject
  • Declaration

    Objective-C

    @interface Photo : NSObject

    Swift

    class Photo : NSObject
  • The Plot module

    Member classes: Plot2d

    Declaration

    Objective-C

    @interface Plot : NSObject

    Swift

    class Plot : NSObject
  • Plot function for Mat data

    Member of Plot

    See more

    Declaration

    Objective-C

    @interface Plot2d : Algorithm

    Swift

    class Plot2d : Algorithm
  • Represents a two dimensional point the coordinate values of which are of type double

    See more

    Declaration

    Objective-C

    @interface Point2d : NSObject

    Swift

    class Point2d : NSObject
  • Represents a two dimensional point the coordinate values of which are of type float

    See more

    Declaration

    Objective-C

    @interface Point2f : NSObject

    Swift

    class Point2f : NSObject
  • Represents a two dimensional point the coordinate values of which are of type int

    See more

    Declaration

    Objective-C

    
    @interface Point2i : NSObject

    Swift

    class Point : NSObject
  • Represents a three dimensional point the coordinate values of which are of type double

    See more

    Declaration

    Objective-C

    @interface Point3d : NSObject

    Swift

    class Point3d : NSObject
  • Represents a three dimensional point the coordinate values of which are of type float

    See more

    Declaration

    Objective-C

    @interface Point3f : NSObject

    Swift

    class Point3f : NSObject
  • Represents a three dimensional point the coordinate values of which are of type int

    See more

    Declaration

    Objective-C

    @interface Point3i : NSObject

    Swift

    class Point3i : NSObject
  • Abstract base class for all strategies of prediction result handling

    Member of Face

    Declaration

    Objective-C

    @interface PredictCollector : NSObject

    Swift

    class PredictCollector : NSObject
  • Class for detecting and decoding QR codes in images; a usage sketch follows this entry.

    Member of Objdetect

    See more

    Declaration

    Objective-C

    @interface QRCodeDetector : NSObject

    Swift

    class QRCodeDetector : NSObject
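
    The usage sketch referenced above. detectAndDecode is assumed to mirror the Java signature: it returns the decoded string (empty when nothing is found) and fills points with the corners of the detected code; Swift labels are assumptions and the file name is a placeholder.

     import opencv2

     let img = Imgcodecs.imread(filename: "qr.jpg")
     let detector = QRCodeDetector()
     let points = Mat()                  // receives the 4 corner points
     let payload = detector.detectAndDecode(img: img, points: points)
     print(payload.isEmpty ? "no QR code found" : "payload: \(payload)")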
  • Helper class that extracts features used by the structured edge detection algorithm (see StructuredEdgeDetection).

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface RFFeatureGetter : Algorithm

    Swift

    class RFFeatureGetter : Algorithm
  • Sparse match interpolation algorithm, based on a modified piecewise locally-weighted affine estimator called Robust Interpolation method of Correspondences (RIC) from CITE: Hu2017, with the Variational and Fast Global Smoother as a post-processing filter. The RICInterpolator is an extension of the EdgeAwareInterpolator. The main concept of this extension is a piecewise affine model based on over-segmentation via SLIC superpixel estimation. The method contains an efficient propagation mechanism to estimate the piecewise models.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface RICInterpolator : SparseMatchInterpolator

    Swift

    class RICInterpolator : SparseMatchInterpolator
  • The class implements the random forest predictor.

    See

    REF: ml_intro_rtrees

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface RTrees : DTrees

    Swift

    class RTrees : DTrees
  • Image hash based on Radon transform.

    See CITE: tang2012perceptual for details.

    Member of Img_hash

    See more

    Declaration

    Objective-C

    @interface RadialVarianceHash : ImgHashBase

    Swift

    class RadialVarianceHash : ImgHashBase
  • Represents a range of dimension indices

    See more

    Declaration

    Objective-C

    @interface Range : NSObject

    Swift

    class Range : NSObject
  • Represents a rectangle the coordinate and dimension values of which are of type double

    See more

    Declaration

    Objective-C

    @interface Rect2d : NSObject

    Swift

    class Rect2d : NSObject
  • Represents a rectangle the coordinate and dimension values of which are of type float

    See more

    Declaration

    Objective-C

    @interface Rect2f : NSObject

    Swift

    class Rect2f : NSObject
  • Represents a rectangle the coordinate and dimension values of which are of type int

    See more

    Declaration

    Objective-C

    
    @interface Rect2i : NSObject

    Swift

    class Rect : NSObject
  • Class which allows the Gipsa/Listic Labs retina model to be used with OpenCV.

    This retina model allows spatio-temporal image processing (applied to still images or video sequences). As a summary, these are the retina model properties:

    • It applies spectral whitening (mid-frequency detail enhancement)
    • high-frequency spatio-temporal noise reduction
    • low-frequency luminance reduction (luminance range compression)
    • local logarithmic luminance compression allows details to be enhanced in low-light conditions

    USE: this model can be used for spatio-temporal video effects but also for:

    • texture analysis, using the getParvo method output matrix, with enhanced signal-to-noise ratio and enhanced details robust against input image luminance ranges
    • motion analysis, using the getMagno method output matrix, also with the previously cited properties

    For more information, refer to the following papers: Benoit A., Caplier A., Durette B., Herault, J., “USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING”, Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011 and Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), by Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.

    The retina filter includes the research contributions of PhD/research colleagues from which code has been redrawn by the author. Take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene’s color mosaicing/demosaicing and the reference paper: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). “Efficient demosaicing through recursive filtering”, IEEE International Conference on Image Processing ICIP 2007. Take a look at imagelogpolprojection.hpp to discover the retina spatial log sampling, which originates from Barthelemy Durette’s PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny’s discussions. More information is in the above-cited Jeanny Herault book.

    Member of Bioinspired

    See more

    Declaration

    Objective-C

    @interface Retina : Algorithm

    Swift

    class Retina : Algorithm
  • A wrapper class which allows the tone mapping algorithm of Meylan et al. (2007) to be used with OpenCV.

    This algorithm is already implemented in the Retina class (retina::applyFastToneMapping), but using this wrapper does not require the whole retina model to be allocated. This allows light memory usage on low-memory devices (smartphones, etc.). As a summary, these are the model properties:

    • 2 stages of local luminance adaptation, with a different local neighborhood for each.
    • the first stage models the retina photoreceptors’ local luminance adaptation
    • the second stage models the ganglion cells’ local information adaptation
    • compared to the initial publication, this class uses spatio-temporal low-pass filters instead of spatial-only filters. This can help noise robustness and temporal stability for video sequence use cases.

    For more information, refer to the following papers: Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of the Optical Society of America, A, Vol. 24, N 9, September 1st, 2007, pp. 2807-2816; Benoit A., Caplier A., Durette B., Herault, J., “USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING”, Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011 regarding the spatio-temporal filter; and, for the bigger retina model: Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), by Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.

    Member of Bioinspired

    See more

    Declaration

    Objective-C

    @interface RetinaFastToneMapping : Algorithm

    Swift

    class RetinaFastToneMapping : Algorithm
  • Applies a Ridge Detection Filter to an input image. Implements ridge detection similar to the one in Mathematica, using the eigenvalues of the Hessian matrix of the input image computed with Sobel derivatives. Additional refinement can be done using skeletonization and binarization. Adapted from CITE: segleafvein and CITE: M_RF.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface RidgeDetectionFilter : Algorithm

    Swift

    class RidgeDetectionFilter : Algorithm
  • Represents a rotated rectangle on a plane

    See more

    Declaration

    Objective-C

    @interface RotatedRect : NSObject

    Swift

    class RotatedRect : NSObject
  • Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) algorithm by D. Lowe CITE: Lowe04 .

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface SIFT : Feature2D

    Swift

    class SIFT : Feature2D
  • Class for extracting Speeded Up Robust Features from an image CITE: Bay06 .

    The algorithm parameters:

    • member int extended
      • 0 means that the basic descriptors (64 elements each) shall be computed
      • 1 means that the extended descriptors (128 elements each) shall be computed
    • member int upright
      • 0 means that the detector computes the orientation of each feature.
      • 1 means that the orientation is not computed (which is much, much faster). For example, if you match images from a stereo pair, or do image stitching, the matched features likely have very similar angles, and you can speed up feature extraction by setting upright=1.
    • member double hessianThreshold Threshold for the keypoint detector. Only features whose Hessian is larger than hessianThreshold are retained by the detector. Therefore, the larger the value, the fewer keypoints you will get. A good default value could be from 300 to 500, depending on the image contrast.
    • member int nOctaves The number of Gaussian pyramid octaves that the detector uses. It is set to 4 by default. If you want to detect very large features, use a larger value; if you want only small features, decrease it.
    • member int nOctaveLayers The number of images within each octave of a Gaussian pyramid. It is set to 2 by default.

    Note

    • An example using the SURF feature detector can be found at opencv_source_code/samples/cpp/generic_descriptor_match.cpp
    • Another example using the SURF feature detector, extractor and matcher can be found at opencv_source_code/samples/cpp/matcher_simple.cpp

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface SURF : Feature2D

    Swift

    class SURF : Feature2D
  • SVM

    Support Vector Machines.

    See

    REF: ml_intro_svm

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface SVM : StatModel

    Swift

    class SVM : StatModel
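
    A hedged Swift sketch of training a linear SVM on four 2-D samples. The constants (SVM.C_SVC, SVM.LINEAR, Ml.ROW_SAMPLE), the setter names and the Swift labels follow the Java binding and are assumptions.

     import opencv2

     // Row-major training data: four samples, two features each.
     let samples = Mat(rows: 4, cols: 2, type: CvType.CV_32F)
     try? samples.put(row: 0, col: 0,
                      data: [0, 0,  0, 1,  1, 0,  1, 1] as [Float])
     let labels = Mat(rows: 4, cols: 1, type: CvType.CV_32S)
     try? labels.put(row: 0, col: 0, data: [0, 0, 1, 1] as [Int32])

     let svm = SVM.create()
     svm.setType(SVM.C_SVC)        // classification
     svm.setKernel(SVM.LINEAR)     // linear kernel
     svm.train(samples: samples, layout: Ml.ROW_SAMPLE, responses: labels)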
  • Stochastic Gradient Descent SVM classifier.

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface SVMSGD : StatModel

    Swift

    class SVMSGD : StatModel
  • Represents a four element vector

    See more

    Declaration

    Objective-C

    @interface Scalar : NSObject

    Swift

    class Scalar : NSObject
  • This class represents a high-level API for segmentation models

    SegmentationModel allows you to set parameters for preprocessing the input image. It creates a network from a file with trained weights and config, sets the preprocessing input, runs a forward pass and returns the class prediction for each pixel.

    Member of Dnn

    See more

    Declaration

    Objective-C

    @interface SegmentationModel : Model

    Swift

    class SegmentationModel : Model
  • Selective search segmentation algorithm. The class implements the algorithm described in CITE: uijlings2013selective.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface SelectiveSearchSegmentation : Algorithm

    Swift

    class SelectiveSearchSegmentation : Algorithm
  • Strategy for the selective search segmentation algorithm. The class implements a generic strategy for the algorithm described in CITE: uijlings2013selective.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface SelectiveSearchSegmentationStrategy : Algorithm

    Swift

    class SelectiveSearchSegmentationStrategy : Algorithm
  • Color-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in CITE: uijlings2013selective.

    Member of Ximgproc

    Declaration

    Objective-C

    @interface SelectiveSearchSegmentationStrategyColor
        : SelectiveSearchSegmentationStrategy

    Swift

    class SelectiveSearchSegmentationStrategyColor : SelectiveSearchSegmentationStrategy
  • Fill-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in CITE: uijlings2013selective.

    Member of Ximgproc

    Declaration

    Objective-C

    @interface SelectiveSearchSegmentationStrategyFill
        : SelectiveSearchSegmentationStrategy

    Swift

    class SelectiveSearchSegmentationStrategyFill : SelectiveSearchSegmentationStrategy
  • Regroup multiple strategies for the selective search segmentation algorithm

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface SelectiveSearchSegmentationStrategyMultiple
        : SelectiveSearchSegmentationStrategy

    Swift

    class SelectiveSearchSegmentationStrategyMultiple : SelectiveSearchSegmentationStrategy
  • Size-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in CITE: uijlings2013selective.

    Member of Ximgproc

    Declaration

    Objective-C

    @interface SelectiveSearchSegmentationStrategySize
        : SelectiveSearchSegmentationStrategy

    Swift

    class SelectiveSearchSegmentationStrategySize : SelectiveSearchSegmentationStrategy
  • Texture-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in CITE: uijlings2013selective.

    Member of Ximgproc

    Declaration

    Objective-C

    @interface SelectiveSearchSegmentationStrategyTexture
        : SelectiveSearchSegmentationStrategy

    Swift

    class SelectiveSearchSegmentationStrategyTexture : SelectiveSearchSegmentationStrategy
  • Class for extracting blobs from an image. :

    The class implements a simple algorithm for extracting blobs from an image:

    1. Convert the source image to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold (exclusive) with distance thresholdStep between neighboring thresholds.
    2. Extract connected components from every binary image by findContours and calculate their centers.
    3. Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob, which is controlled by the minDistBetweenBlobs parameter.
    4. From the groups, estimate final centers of blobs and their radiuses and return as locations and sizes of keypoints.

    This class performs several filtrations of returned blobs. You should set filterBy* to true/false to turn on/off corresponding filtration. Available filtrations:

    • By color. This filter compares the intensity of a binary image at the center of a blob to blobColor. If they differ, the blob is filtered out. Use blobColor = 0 to extract dark blobs and blobColor = 255 to extract light blobs.
    • By area. Extracted blobs have an area between minArea (inclusive) and maxArea (exclusive).
    • By circularity. Extracted blobs have circularity (
      \frac{4*\pi*Area}{perimeter * perimeter}
      ) between minCircularity (inclusive) and maxCircularity (exclusive).
    • By ratio of the minimum inertia to maximum inertia. Extracted blobs have this ratio between minInertiaRatio (inclusive) and maxInertiaRatio (exclusive).
    • By convexity. Extracted blobs have convexity (area / area of blob convex hull) between minConvexity (inclusive) and maxConvexity (exclusive).

    Default values of parameters are tuned to extract dark circular blobs.

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface SimpleBlobDetector : Feature2D

    Swift

    class SimpleBlobDetector : Feature2D
  • The Params module

    Member of Features2d

    See more

    Declaration

    Objective-C

    @interface SimpleBlobDetectorParams : NSObject

    Swift

    class SimpleBlobDetectorParams : NSObject
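
    A hedged Swift sketch tying the two entries above together: tune the filters, then detect. It assumes the params are plain properties and that a create(parameters:) factory exists, as in the Java binding (Swift labels assumed; file name is a placeholder).

     import opencv2
     import Foundation

     let params = SimpleBlobDetectorParams()
     params.filterByArea = true
     params.minArea = 50            // keep blobs with area in [50, 5000)
     params.maxArea = 5000
     params.filterByColor = true
     params.blobColor = 0           // 0 extracts dark blobs, 255 light ones

     let detector = SimpleBlobDetector.create(parameters: params)
     let gray = Imgcodecs.imread(filename: "blobs.png", flags: 0)
     let keypoints = NSMutableArray()    // receives KeyPoint objects
     detector.detect(image: gray, keypoints: keypoints)
     print("found \(keypoints.count) blobs")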
  • A simple white balance algorithm that works by independently stretching each of the input image channels to the specified range. For increased robustness it ignores the top and bottom p% of pixel values.

    Member of Xphoto

    See more

    Declaration

    Objective-C

    @interface SimpleWB : WhiteBalancer

    Swift

    class SimpleWB : WhiteBalancer
  • Class implementing Fourier transform profilometry (FTP), phase-shifting profilometry (PSP) and Fourier-assisted phase-shifting profilometry (FAPS) based on CITE: faps.

    This class generates sinusoidal patterns that can be used with FTP, PSP and FAPS.

    Member of Structured_light

    See more

    Declaration

    Objective-C

    @interface SinusoidalPattern : StructuredLightPattern

    Swift

    class SinusoidalPattern : StructuredLightPattern
  • Parameters of the SinusoidalPattern constructor.

    • width Projector’s width.
    • height Projector’s height.
    • nbrOfPeriods Number of periods along the pattern direction.
    • shiftValue Phase shift between two consecutive patterns.
    • methodId Allows choosing between FTP, PSP and FAPS.
    • nbrOfPixelsBetweenMarkers Number of pixels between two consecutive markers on the same row.
    • setMarkers Allows setting markers on the patterns.
    • markersLocation Vector used to store marker locations on the patterns.

    Member of Structured_light

    See more

    Declaration

    Objective-C

    @interface SinusoidalPatternParams : NSObject

    Swift

    class SinusoidalPatternParams : NSObject
  • Represents the dimensions of a rectangle the values of which are of type double

    See more

    Declaration

    Objective-C

    @interface Size2d : NSObject

    Swift

    class Size2d : NSObject
  • Represents the dimensions of a rectangle the values of which are of type float

    See more

    Declaration

    Objective-C

    @interface Size2f : NSObject

    Swift

    class Size2f : NSObject
  • Represents the dimensions of a rectangle the values of which are of type int

    See more

    Declaration

    Objective-C

    
    @interface Size2i : NSObject

    Swift

    class Size : NSObject
  • Main interface for all filters that take sparse matches as input and produce a dense per-pixel matching (optical flow) as output.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface SparseMatchInterpolator : Algorithm

    Swift

    class SparseMatchInterpolator : Algorithm
  • Base interface for sparse optical flow algorithms.

    Member of Video

    See more

    Declaration

    Objective-C

    @interface SparseOpticalFlow : Algorithm

    Swift

    class SparseOpticalFlow : Algorithm
  • Class used for calculating a sparse optical flow.

    The class can calculate an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    See

    calcOpticalFlowPyrLK

    Member of Video

    See more

    Declaration

    Objective-C

    @interface SparsePyrLKOpticalFlow : SparseOpticalFlow

    Swift

    class SparsePyrLKOpticalFlow : SparseOpticalFlow
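
    A hedged Swift sketch of tracking a sparse point set between two frames. The calc signature follows the SparseOpticalFlow interface of the Java binding (Swift labels assumed; file names are placeholders).

     import opencv2

     let prev = Imgcodecs.imread(filename: "frame0.png", flags: 0)
     let next = Imgcodecs.imread(filename: "frame1.png", flags: 0)

     // One CV_32FC2 entry per point to track (e.g. from goodFeaturesToTrack).
     let prevPts = Mat(rows: 10, cols: 1, type: CvType.CV_32FC2)
     let nextPts = Mat(), status = Mat()

     let lk = SparsePyrLKOpticalFlow.create()
     lk.calc(prevImg: prev, nextImg: next, prevPts: prevPts,
             nextPts: nextPts, status: status)
     // status is CV_8U: 1 where the flow for a point was found.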
  • Default predict collector

    Tracks the minimal distance with threshold checking (the default behavior for most prediction logic)

    Member of Face

    See more

    Declaration

    Objective-C

    @interface StandardCollector : PredictCollector

    Swift

    class StandardCollector : PredictCollector
  • The class implements the keypoint detector introduced by CITE: Agrawal08, synonym of StarDetector. :

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface StarDetector : Feature2D

    Swift

    class StarDetector : Feature2D
  • Base class for statistical models in OpenCV ML.

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface StatModel : Algorithm

    Swift

    class StatModel : Algorithm
  • Class for computing stereo correspondence using the block matching algorithm, introduced and contributed to OpenCV by K. Konolige.

    Member of Calib3d

    See more

    Declaration

    Objective-C

    @interface StereoBM : StereoMatcher

    Swift

    class StereoBM : StereoMatcher
  • The base class for stereo correspondence algorithms.

    Member of Calib3d

    See more

    Declaration

    Objective-C

    @interface StereoMatcher : Algorithm

    Swift

    class StereoMatcher : Algorithm
  • The class implements the modified H. Hirschmuller algorithm CITE: HH08 that differs from the original one as follows:

    • By default, the algorithm is single-pass, which means that you consider only 5 directions instead of 8. Set mode=StereoSGBM::MODE_HH in createStereoSGBM to run the full variant of the algorithm but beware that it may consume a lot of memory.
    • The algorithm matches blocks, not individual pixels. However, setting blockSize=1 reduces the blocks to single pixels.
    • The mutual information cost function is not implemented. Instead, a simpler Birchfield-Tomasi sub-pixel metric from CITE: BT98 is used, though color images are supported as well.
    • Some pre- and post- processing steps from K. Konolige algorithm StereoBM are included, for example: pre-filtering (StereoBM::PREFILTER_XSOBEL type) and post-filtering (uniqueness check, quadratic interpolation and speckle filtering).

    @note - (Python) An example illustrating the use of the StereoSGBM matching algorithm can be found at opencv_source_code/samples/python/stereo_match.py

    Member of Calib3d

    See more

    Declaration

    Objective-C

    @interface StereoSGBM : StereoMatcher

    Swift

    class StereoSGBM : StereoMatcher
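
    A hedged Swift sketch of computing a disparity map from a rectified pair. The three-argument create overload and the StereoMatcher compute signature follow the Java binding (Swift labels assumed; file names are placeholders).

     import opencv2

     let left  = Imgcodecs.imread(filename: "left.png",  flags: 0)
     let right = Imgcodecs.imread(filename: "right.png", flags: 0)

     // numDisparities must be divisible by 16; mode=MODE_HH would run the
     // full 8-direction variant at a higher memory cost, as noted above.
     let sgbm = StereoSGBM.create(minDisparity: 0, numDisparities: 64,
                                  blockSize: 5)

     let disparity = Mat()     // CV_16S output, disparity values scaled by 16
     sgbm.compute(left: left, right: right, disparity: disparity)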
  • Class implementing edge detection algorithm from CITE: Dollar2013 :

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface StructuredEdgeDetection : Algorithm

    Swift

    class StructuredEdgeDetection : Algorithm
  • Abstract base class for generating and decoding structured light patterns.

    Member of Structured_light

    See more

    Declaration

    Objective-C

    @interface StructuredLightPattern : Algorithm

    Swift

    class StructuredLightPattern : Algorithm
  • Declaration

    Objective-C

    @interface Structured_light : NSObject

    Swift

    class Structured_light : NSObject
  • The Subdiv2D module

    Member of Imgproc

    See more

    Declaration

    Objective-C

    @interface Subdiv2D : NSObject

    Swift

    class Subdiv2D : NSObject
  • Class implementing the LSC (Linear Spectral Clustering) superpixels algorithm described in CITE: LiCVPR2015LSC.

    LSC (Linear Spectral Clustering) produces compact and uniform superpixels with low computational costs. Basically, a normalized cuts formulation of the superpixel segmentation is adopted based on a similarity metric that measures the color similarity and space proximity between image pixels. LSC is of linear computational complexity and high memory efficiency and is able to preserve global properties of images.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface SuperpixelLSC : Algorithm

    Swift

    class SuperpixelLSC : Algorithm
  • Class implementing the SEEDS (Superpixels Extracted via Energy-Driven Sampling) superpixels algorithm described in CITE: VBRV14 .

    The algorithm uses an efficient hill-climbing algorithm to optimize the superpixels’ energy function that is based on color histograms and a boundary term, which is optional. The energy function encourages superpixels to be of the same color, and if the boundary term is activated, the superpixels have smooth boundaries and are of similar shape. In practice it starts from a regular grid of superpixels and moves the pixels or blocks of pixels at the boundaries to refine the solution. The algorithm runs in real-time using a single CPU.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface SuperpixelSEEDS : Algorithm

    Swift

    class SuperpixelSEEDS : Algorithm
  • Class implementing the SLIC (Simple Linear Iterative Clustering) superpixels algorithm described in CITE: Achanta2012.

    SLIC (Simple Linear Iterative Clustering) clusters pixels using pixel channels and image plane space to efficiently generate compact, nearly uniform superpixels. The simplicity of the approach makes it extremely easy to use: a lone parameter specifies the number of superpixels, and the efficiency of the algorithm makes it very practical. Several optimizations are available for the SLIC class: SLICO stands for “Zero parameter SLIC” and is an optimization of baseline SLIC described in CITE: Achanta2012; MSLIC stands for “Manifold SLIC” and is an optimization of baseline SLIC described in CITE: Liu_2017_IEEE.

    Member of Ximgproc

    See more

    Declaration

    Objective-C

    @interface SuperpixelSLIC : Algorithm

    Swift

    class SuperpixelSLIC : Algorithm
  • Synthetic frame sequence generator for testing background subtraction algorithms.

    It will generate the moving object on top of the background. It will apply some distortion to the background to make the test more complex.

    Member of Bgsegm

    See more

    Declaration

    Objective-C

    @interface SyntheticSequenceGenerator : Algorithm

    Swift

    class SyntheticSequenceGenerator : Algorithm
  • Class representing termination criteria for iterative algorithms.

    See more

    Declaration

    Objective-C

    @interface TermCriteria : NSObject

    Swift

    class TermCriteria : NSObject
  • An abstract class providing interface for text detection algorithms

    Member of Text

    See more

    Declaration

    Objective-C

    @interface TextDetector : NSObject

    Swift

    class TextDetector : NSObject
  • TextDetectorCNN class provides the functionality of text bounding box detection. This class finds bounding boxes of text words given an input image. It uses the OpenCV dnn module to load a pre-trained model described in CITE: LiaoSBWL17. The original repository with the modified SSD Caffe version: https://github.com/MhLiao/TextBoxes. The model can be downloaded from DropBox. A modified .prototxt file with the model description can be found in opencv_contrib/modules/text/samples/textbox.prototxt.

    Member of Text

    See more

    Declaration

    Objective-C

    @interface TextDetectorCNN : TextDetector

    Swift

    class TextDetectorCNN : TextDetector
  • A class to measure elapsed time.

    The class computes elapsed time by counting the number of ticks per second. That is, the following code computes the execution time in seconds: SNIPPET: snippets/core_various.cpp TickMeter_total

    It is also possible to compute the average time over multiple runs: SNIPPET: snippets/core_various.cpp TickMeter_average

    See

    getTickCount, getTickFrequency

    Member of Core

    See more

    Declaration

    Objective-C

    @interface TickMeter : NSObject

    Swift

    class TickMeter : NSObject
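
    A minimal Swift sketch of timing a code section; the start/stop/getTimeSec names follow the C++/Java TickMeter API.

     import opencv2

     let tm = TickMeter()
     tm.start()
     // ... work to be timed ...
     tm.stop()
     print("elapsed: \(tm.getTimeSec()) s")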
  • Base class for tonemapping algorithms - tools that are used to map an HDR image to the 8-bit range.

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface Tonemap : Algorithm

    Swift

    class Tonemap : Algorithm
  • Adaptive logarithmic mapping is a fast global tonemapping algorithm that scales the image in logarithmic domain.

    Since it is a global operator, the same function is applied to all pixels; it is controlled by the bias parameter.

    Optional saturation enhancement is possible as described in CITE: FL02 .

    For more information see CITE: DM03 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface TonemapDrago : Tonemap

    Swift

    class TonemapDrago : Tonemap
  • This algorithm decomposes the image into two layers, a base layer and a detail layer, using a bilateral filter, and compresses the contrast of the base layer, thus preserving all the details.

    This implementation uses regular bilateral filter from OpenCV.

    Saturation enhancement is possible as in cv::TonemapDrago.

    For more information see CITE: DD02 .

    Member of Xphoto

    See more

    Declaration

    Objective-C

    @interface TonemapDurand : Tonemap

    Swift

    class TonemapDurand : Tonemap
  • This algorithm transforms the image to contrast using gradients on all levels of a Gaussian pyramid, transforms the contrast values to HVS response and scales the response. After this, the image is reconstructed from the new contrast values.

    For more information see CITE: MM06 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface TonemapMantiuk : Tonemap

    Swift

    class TonemapMantiuk : Tonemap
  • This is a global tonemapping operator that models the human visual system.

    The mapping function is controlled by an adaptation parameter that is computed using light adaptation and color adaptation.

    For more information see CITE: RD05 .

    Member of Photo

    See more

    Declaration

    Objective-C

    @interface TonemapReinhard : Tonemap

    Swift

    class TonemapReinhard : Tonemap
  • Base abstract class for the long-term tracker:

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface Tracker : Algorithm

    Swift

    class Tracker : Algorithm
  • the Boosting tracker

    This is a real-time object tracker based on a novel online version of the AdaBoost algorithm. The classifier uses the surrounding background as negative examples in the update step to avoid the drifting problem. The implementation is based on CITE: OLB .

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface TrackerBoosting : Tracker

    Swift

    class TrackerBoosting : Tracker
  • the CSRT tracker

    The implementation is based on CITE: Lukezic_IJCV2018 Discriminative Correlation Filter with Channel and Spatial Reliability

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface TrackerCSRT : Tracker

    Swift

    class TrackerCSRT : Tracker
  • the GOTURN (Generic Object Tracking Using Regression Networks) tracker

    GOTURN (CITE: GOTURN) is a kind of tracker based on Convolutional Neural Networks (CNN). While taking all the advantages of CNN trackers, GOTURN is much faster, since it is trained offline and requires no online fine-tuning. GOTURN addresses the problem of single-target tracking: given a bounding box label of an object in the first frame of the video, we track that object through the rest of the video. NOTE: the current GOTURN method does not handle occlusions; however, it is fairly robust to viewpoint changes, lighting changes, and deformations.

    The inputs of GOTURN are two RGB patches representing the Target and Search patches, resized to 227x227. The outputs are the predicted bounding box coordinates, relative to the Search patch coordinate system, in the format X1,Y1,X2,Y2. The original paper is here: http://davheld.github.io/GOTURN/GOTURN.pdf as is the original authors’ implementation: https://github.com/davheld/GOTURN#train-the-tracker The training algorithm is implemented separately due to third-party dependencies: https://github.com/Auron-X/GOTURN_Training_Toolkit The GOTURN architecture goturn.prototxt and trained model goturn.caffemodel are accessible on the opencv_extra GitHub repository.

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface TrackerGOTURN : Tracker

    Swift

    class TrackerGOTURN : Tracker
  • the KCF (Kernelized Correlation Filter) tracker

    KCF is a novel tracking framework that utilizes the properties of circulant matrices to enhance the processing speed. This tracking method is an implementation of CITE: KCF_ECCV, extended to KCF with color-names features (CITE: KCF_CN). The original KCF paper is available at http://www.robots.ox.ac.uk/~joao/publications/henriques_tpami2015.pdf as well as the matlab implementation. For more information about KCF with color-names features, please refer to http://www.cvl.isy.liu.se/research/objrec/visualtracking/colvistrack/index.html. A usage sketch follows this entry.

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface TrackerKCF : Tracker

    Swift

    class TrackerKCF : Tracker
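
    The usage sketch referenced above: initialize on the first frame, then update per frame. The create/init/update methods follow the Java tracking API; the Swift labels, the Rect2d initializer and the backticked `init` spelling are assumptions, and the file names are placeholders.

     import opencv2

     let tracker = TrackerKCF.create()

     let frame0 = Imgcodecs.imread(filename: "frame0.png")
     let box = Rect2d(x: 120, y: 80, width: 64, height: 48)
     tracker.`init`(image: frame0, boundingBox: box)   // assumed Swift spelling

     let frame1 = Imgcodecs.imread(filename: "frame1.png")
     let found = tracker.update(image: frame1, boundingBox: box)
     print(found ? "tracked to \(box)" : "target lost")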
  • The MIL algorithm trains a classifier in an online manner to separate the object from the background.

    Multiple Instance Learning avoids the drift problem for robust tracking. The implementation is based on CITE: MIL .

    Original code can be found here http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface TrackerMIL : Tracker

    Swift

    class TrackerMIL : Tracker
  • the MOSSE (Minimum Output Sum of Squared Error) tracker

    The implementation is based on CITE: MOSSE Visual Object Tracking using Adaptive Correlation Filters

    Note

    This tracker works with grayscale images; if BGR images are passed, they will be converted internally.

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface TrackerMOSSE : Tracker

    Swift

    class TrackerMOSSE : Tracker
  • the Median Flow tracker

    Implementation of a paper CITE: MedianFlow .

    The tracker is suitable for very smooth and predictable movements when the object is visible throughout the whole sequence. It is quite fast and accurate for this type of problem (in particular, it was shown by the authors to outperform MIL). During implementation, the code at http://www.aonsquared.co.uk/node/5, courtesy of its author Arthur Amarra, was used for reference purposes.

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface TrackerMedianFlow : Tracker

    Swift

    class TrackerMedianFlow : Tracker
  • the TLD (Tracking, learning and detection) tracker

    TLD is a novel tracking framework that explicitly decomposes the long-term tracking task into tracking, learning and detection.

    The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning component estimates the detector's errors and updates it to avoid those errors in the future. The implementation is based on CITE: TLD.

    The Median Flow algorithm (see cv::TrackerMedianFlow) was chosen as the tracking component in this implementation, following the authors. The tracker is designed to handle rapid motion, partial occlusions, object absence, etc.

    Member of Tracking

    See more

    Declaration

    Objective-C

    @interface TrackerTLD : Tracker

    Swift

    class TrackerTLD : Tracker
  • Declaration

    Objective-C

    @interface Tracking : NSObject

    Swift

    class Tracking : NSObject
  • Class encapsulating training data.

    Please note that the class only specifies the interface of training data, not its implementation. All the statistical model classes in the ml module accept Ptr<TrainData> as a parameter. In other words, you can create your own class derived from TrainData and pass a smart pointer to an instance of that class into StatModel::train.
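
    For illustration, a minimal sketch of wrapping samples and responses in a TrainData instance and passing it to a statistical model; the create(samples:layout:responses:) and train(trainData:) labels mirror the C++ ml API and are assumptions for the Swift bindings:

    import opencv2

    // One sample per row (ROW_SAMPLE layout, value 0 in the C++ API),
    // one response label per sample.
    let samples = Mat(rows: 4, cols: 2, type: CvType.CV_32F)
    let responses = Mat(rows: 4, cols: 1, type: CvType.CV_32S)
    // ... fill samples and responses ...

    let data = TrainData.create(samples: samples, layout: 0, responses: responses)
    let svm = SVM.create()
    svm.train(trainData: data)   // any StatModel accepts a TrainData instance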

    See

    REF: ml_intro_data

    Member of Ml

    See more

    Declaration

    Objective-C

    @interface TrainData : NSObject

    Swift

    class TrainData : NSObject
  • class which provides a transient/moving areas segmentation module

    It performs a locally adapted segmentation using the retina magno channel input data. Based on Alexandre Benoit's thesis: "Le système visuel humain au secours de la vision par ordinateur" ("The human visual system to the rescue of computer vision").

    Three spatio-temporal filters are used:

    • a first filter that removes noise and local variations of the input motion energy
    • a second, more powerful low-pass spatial filter that gives the neighborhood motion energy; the segmentation consists in comparing these two outputs: if the local motion energy is higher than the neighborhood motion energy, the area is considered moving and is segmented
    • a stronger third low-pass filter that aids the decision by providing smooth information about the "motion context" in a wider area
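
    A minimal usage sketch follows; the create(inputSize:), run(inputToSegment:) and getSegmentationPicture(transientAreas:) labels mirror the C++ bioinspired API and are assumptions for the Swift bindings (magno is a placeholder for the magno-channel output of a Retina instance):

    import opencv2

    // Feed the retina magno output into the segmentation module and fetch
    // the resulting binary map of transient (moving) areas.
    let segmenter = TransientAreasSegmentationModule.create(inputSize: Size(width: 640, height: 480))
    let magno = Mat()            // magno channel output from a Retina instance
    let transientAreas = Mat()   // binary segmentation result

    segmenter.run(inputToSegment: magno)
    segmenter.getSegmentationPicture(transientAreas: transientAreas)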

    Member of Bioinspired

    See more

    Declaration

    Objective-C

    @interface TransientAreasSegmentationModule : Algorithm

    Swift

    class TransientAreasSegmentationModule : Algorithm
  • VGG

    Class implementing the VGG (Oxford Visual Geometry Group) descriptor, trained end to end using the "Descriptor Learning Using Convex Optimisation" (DLCO) apparatus described in CITE: Simonyan14.

    • desc — type of descriptor to use; VGG::VGG_120 is the default (120-dimensional float descriptor). Available types are VGG::VGG_120, VGG::VGG_80, VGG::VGG_64 and VGG::VGG_48.
    • isigma — Gaussian kernel value for image blur (default is 1.4f).
    • img_normalize — use image sample intensity normalization (enabled by default).
    • use_orientation — sample patterns using keypoint orientation (enabled by default).
    • scale_factor — adjusts the sampling window of detected keypoints to the 64.0f VGG sampling window. The default of 6.25f fits the window ratio of KAZE and SURF keypoints; 6.75f fits SIFT; 5.00f fits AKAZE, MSD, AGAST, FAST and BRISK; 0.75f fits ORB.
    • dsc_normalize — clamp descriptors to 255 and convert to uchar CV_8UC1 (disabled by default).
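
    For illustration, a sketch of tuning the sampling window for SIFT keypoints using the ratio listed above; the setter and compute labels mirror the C++ VGG/Feature2D interface and are assumptions for the Swift bindings (image and keypoints are placeholders for data produced by a detector):

    import opencv2

    let vgg = VGG.create()       // defaults: VGG_120, isigma = 1.4f, scale_factor = 6.25f
    vgg.setScaleFactor(6.75)     // match the SIFT keypoint window ratio

    var keypoints: [KeyPoint] = []   // e.g. produced by a SIFT detector
    let descriptors = Mat()
    vgg.compute(image: image, keypoints: &keypoints, descriptors: descriptors)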

    Member of Xfeatures2d

    See more

    Declaration

    Objective-C

    @interface VGG : Feature2D

    Swift

    class VGG : Feature2D
  • Variational optical flow refinement

    This class implements variational refinement of the input flow field, i.e. it uses input flow to initialize the minimization of the following functional:

    E(U) = \int_{\Omega} \delta \Psi(E_I) + \gamma \Psi(E_G) + \alpha \Psi(E_S)

    where E_I, E_G and E_S are the color constancy, gradient constancy and smoothness terms respectively, and \Psi(s^2) = \sqrt{s^2 + \epsilon^2} is a robust penalizer that limits the influence of outliers. A complete formulation and a description of the minimization procedure can be found in CITE: Brox2004.
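
    A minimal usage sketch, assuming the Swift bindings expose the C++ DenseOpticalFlow interface as calc(I0:I1:flow:) (prev, next and flow are placeholders; flow must contain a valid initial estimate, e.g. from another optical flow algorithm):

    import opencv2

    // Refine a coarse flow field in place.
    let refiner = VariationalRefinement.create()
    refiner.calc(I0: prev, I1: next, flow: flow)   // flow is refined in place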

    Member of Video

    See more

    Declaration

    Objective-C

    @interface VariationalRefinement : DenseOpticalFlow

    Swift

    class VariationalRefinement : DenseOpticalFlow
  • Declaration

    Objective-C

    @interface Video : NSObject

    Swift

    class Video : NSObject
  • Class for video capturing from video files, image sequences or cameras.

    The class provides a C++ API for capturing video from cameras or for reading video files and image sequences.

    Here is how the class can be used: INCLUDE: samples/cpp/videocapture_basic.cpp
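
    A minimal Swift capture loop analogous to that sample; the initializer and read(image:) labels are assumptions for the Swift bindings:

    import opencv2

    let capture = VideoCapture(index: 0)    // open the default camera
    guard capture.isOpened() else { fatalError("camera not available") }

    let frame = Mat()
    while capture.read(image: frame) {      // read() returns false at end of stream
        // ... process frame ...
    }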

    Note

    In the REF: videoio_c "C API", the black-box structure CvCapture is used instead of VideoCapture.
    • (C++) A basic sample on using the VideoCapture interface can be found at OPENCV_SOURCE_CODE/samples/cpp/videocapture_starter.cpp
    • (Python) A basic sample on using the VideoCapture interface can be found at OPENCV_SOURCE_CODE/samples/python/video.py
    • (Python) A multi-threaded video processing sample can be found at OPENCV_SOURCE_CODE/samples/python/video_threaded.py
    • (Python) A VideoCapture sample showcasing some features of the Video4Linux2 backend can be found at OPENCV_SOURCE_CODE/samples/python/video_v4l2.py

    Member of Videoio

    See more

    Declaration

    Objective-C

    @interface VideoCapture : NSObject

    Swift

    class VideoCapture : NSObject
  • Video writer class.

    The class provides a C++ API for writing video files or image sequences.
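
    For illustration, a sketch of writing MJPG-encoded frames to an .avi file; the initializer labels and the fourcc helper mirror the C++ VideoWriter API and are assumptions for the Swift bindings (frame is a placeholder for an image of the declared size):

    import opencv2

    let fourcc = VideoWriter.fourcc(c1: 77, c2: 74, c3: 80, c4: 71)   // 'M','J','P','G'
    let writer = VideoWriter(filename: "out.avi", fourcc: fourcc,
                             fps: 30.0, frameSize: Size(width: 640, height: 480))
    guard writer.isOpened() else { fatalError("could not open the writer") }

    writer.write(image: frame)   // call once per frame; size must match frameSize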

    Member of Videoio

    See more

    Declaration

    Objective-C

    @interface VideoWriter : NSObject

    Swift

    class VideoWriter : NSObject
  • Declaration

    Objective-C

    @interface Videoio : NSObject

    Swift

    class Videoio : NSObject
  • The base class for auto white balance algorithms.
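
    For illustration, a sketch of applying one concrete implementation through this interface; the createSimpleWB() factory and balanceWhite(src:dst:) labels mirror the C++ xphoto API and are assumptions for the Swift bindings (src is a placeholder input image):

    import opencv2

    // Run a simple white-balance algorithm through the common interface.
    let wb: WhiteBalancer = Xphoto.createSimpleWB()
    let balanced = Mat()
    wb.balanceWhite(src: src, dst: balanced)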

    Member of Xphoto

    See more

    Declaration

    Objective-C

    @interface WhiteBalancer : Algorithm

    Swift

    class WhiteBalancer : Algorithm
  • Declaration

    Objective-C

    @interface Xfeatures2d : NSObject

    Swift

    class Xfeatures2d : NSObject
  • Declaration

    Objective-C

    @interface Xphoto : NSObject

    Swift

    class Xphoto : NSObject