Classes
The following classes are available globally.
-
Utility class to wrap a std::vector<char>
Declaration
Objective-C
@interface ByteVector : NSObject
extension ByteVector : Sequence
Swift
class ByteVector : NSObject
-
Utility functions for handling CvType values
Declaration
Objective-C
@interface CvType : NSObject
Swift
class CvType : NSObject
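Example (a minimal sketch: the makeType, channels and typeToString helpers mirror the Java-style bindings, and the Swift argument labels are assumptions):
Swift
// Compose a type constant and inspect it (helper names assumed from the Java bindings).
let t = CvType.makeType(depth: CvType.CV_8U, channels: 3)  // equivalent to CV_8UC3
print(CvType.channels(t))       // 3
print(CvType.typeToString(t))   // "CV_8UC3"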
-
Utility class to wrap a std::vector<double>
Declaration
Objective-C
@interface DoubleVector : NSObject
extension DoubleVector : Sequence
Swift
class DoubleVector : NSObject
-
Utility class to wrap a std::vector<float>
Declaration
Objective-C
@interface FloatVector : NSObject
extension FloatVector : Sequence
Swift
class FloatVector : NSObject
-
Utility class to wrap a std::vector<int>
Declaration
Objective-C
@interface IntVector : NSObject
extension IntVector : Sequence
Swift
class IntVector : NSObject
-
The class Mat represents an n-dimensional dense numerical single-channel or multi-channel array.
Declaration
Objective-C
@interface Mat : NSObject
Swift
class Mat : NSObject
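Example (a minimal creation/inspection sketch; Mat(rows:cols:type:) and the get accessor follow the Java-style bindings, and the exact Swift labels are assumptions):
Swift
// Create a 3x3 single-channel 8-bit matrix and read back one element.
let mat = Mat(rows: 3, cols: 3, type: CvType.CV_8UC1)
print(mat.rows(), mat.cols())        // 3 3
let value = mat.get(row: 1, col: 1)  // assumed accessor returning [Double]
print(value)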
-
Class implementing the AKAZE keypoint detector and descriptor extractor, described in CITE: ANB13.
AKAZE descriptors can only be used with KAZE or AKAZE keypoints. This class is thread-safe.
Note
When you need descriptors, use Feature2D::detectAndCompute, which provides better performance; when Feature2D::detect is followed by Feature2D::compute, the scale space pyramid is computed twice.
Note
AKAZE implements T-API. When the image is passed as a UMat, some parts of the algorithm will use OpenCL.
Note
[ANB13] Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. Pablo F. Alcantarilla, Jesús Nuevo and Adrien Bartoli. In British Machine Vision Conference (BMVC), Bristol, UK, September 2013.
Member of
Features2d
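Per the note above, a single detectAndCompute call is preferred. Example (a sketch assuming the Swift bindings mirror the Java Feature2D API; the factory, labels and container type are assumptions):
Swift
let akaze = AKAZE.create()    // assumed factory, as in the Java bindings
let gray = Mat()              // placeholder: grayscale input image
var keypoints = [KeyPoint]()  // assumed container type
let descriptors = Mat()
akaze.detectAndCompute(image: gray, mask: Mat(), keypoints: &keypoints, descriptors: descriptors)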
-
Artificial Neural Networks - Multi-Layer Perceptrons.
Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zeros. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once, that is, the weights can be adjusted based on the new training data.
Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.
See
REF: ml_intro_ann
Member of
Ml
-
This is a base class for all more or less complex algorithms in OpenCV, especially for classes of algorithms for which there can be multiple implementations. Examples are stereo correspondence (for which there are algorithms like block matching, semi-global block matching, graph-cut, etc.), background subtraction (which can be done using mixture-of-gaussians models, codebook-based algorithms, etc.), and optical flow (block matching, Lucas-Kanade, Horn-Schunck, etc.).
Here is an example of using SimpleBlobDetector in your application via the Algorithm interface: SNIPPET: snippets/core_various.cpp Algorithm
Member of
Core
Declaration
Objective-C
@interface Algorithm : NSObject
Swift
class Algorithm : NSObject
-
This algorithm converts images to median threshold bitmaps (1 for pixels brighter than median luminance and 0 otherwise) and then aligns the resulting bitmaps using bit operations.
It is invariant to exposure, so exposure values and camera response are not necessary.
In this implementation new image regions are filled with zeros.
For more information see CITE: GW03 .
Member of
Photo
-
Brute-force descriptor matcher.
For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. This descriptor matcher supports masking permissible matches of descriptor sets.
Member of
Features2d
Declaration
Objective-C
@interface BFMatcher : DescriptorMatcher
Swift
class BFMatcher : DescriptorMatcher
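Example (a matching sketch, assuming the Swift API mirrors the Java bindings; labels and container types are assumptions):
Swift
let matcher = BFMatcher()  // defaults: NORM_L2 distance, crossCheck = false
let queryDescriptors = Mat(), trainDescriptors = Mat()  // placeholders from a feature extractor
var matches = [DMatch]()   // assumed container type
matcher.match(queryDescriptors: queryDescriptors, trainDescriptors: trainDescriptors, matches: &matches)
let best = matches.sorted { $0.distance < $1.distance }.prefix(10)  // keep the 10 closest matches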
-
Class to compute an image descriptor using the bag of visual words.
Such a computation consists of the following steps:
- Compute descriptors for a given image and its keypoints set.
- Find the nearest visual words from the vocabulary for each keypoint descriptor.
- Compute the bag-of-words image descriptor as a normalized histogram of vocabulary words encountered in the image. The i-th bin of the histogram is the frequency of the i-th word of the vocabulary in the given image.
Member of
Features2d
Declaration
Objective-C
@interface BOWImgDescriptorExtractor : NSObject
Swift
class BOWImgDescriptorExtractor : NSObject
-
kmeans-based class to train a visual vocabulary using the bag of visual words approach.
Member of
Features2d
Declaration
Objective-C
@interface BOWKMeansTrainer : BOWTrainer
Swift
class BOWKMeansTrainer : BOWTrainer
-
Abstract base class for training the bag of visual words vocabulary from a set of descriptors.
For details, see, for example, Visual Categorization with Bags of Keypoints by Gabriella Csurka, Christopher R. Dance, Lixin Fan, Jutta Willamowski, Cedric Bray, 2004.
Member of
Features2d
Declaration
Objective-C
@interface BOWTrainer : NSObject
Swift
class BOWTrainer : NSObject
-
Class implementing the BRISK keypoint detector and descriptor extractor, described in CITE: LCS11 .
Member of
Features2d
-
K-nearest neighbours-based Background/Foreground Segmentation Algorithm.
The class implements the K-nearest neighbours background subtraction described in CITE: Zivkovic2006 . Very efficient if the number of foreground pixels is low.
Member of
Video
Declaration
Objective-C
@interface BackgroundSubtractorKNN : BackgroundSubtractor
Swift
class BackgroundSubtractorKNN : BackgroundSubtractor
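Example (a per-frame sketch; the createBackgroundSubtractorKNN factory and apply labels mirror the Java bindings and are assumptions):
Swift
let subtractor = Video.createBackgroundSubtractorKNN()
let frame = Mat()   // placeholder: current video frame
let fgMask = Mat()  // receives the foreground mask
subtractor.apply(image: frame, fgmask: fgMask)  // assumed labels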
-
Gaussian Mixture-based Background/Foreground Segmentation Algorithm.
The class implements the Gaussian mixture model background subtraction described in CITE: Zivkovic2004 and CITE: Zivkovic2006 .
Member of
Video
Declaration
Objective-C
@interface BackgroundSubtractorMOG2 : BackgroundSubtractor
Swift
class BackgroundSubtractorMOG2 : BackgroundSubtractor
-
The Calib3d module
Member classes:
CirclesGridFinderParameters, StereoMatcher, StereoBM, StereoSGBM
Member enums:
SolvePnPMethod, HandEyeCalibrationMethod, GridType, UndistortTypes
Declaration
Objective-C
@interface Calib3d : NSObject
Swift
class Calib3d : NSObject
-
The inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. The objective function is constructed using pixel values at the same position in all images; an extra term is added to make the result smoother.
For more information see CITE: DM97 .
Member of
Photo
Declaration
Objective-C
@interface CalibrateDebevec : CalibrateCRF
Swift
class CalibrateDebevec : CalibrateCRF
-
The inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. This algorithm uses all image pixels.
For more information see CITE: RB99 .
Member of
Photo
Declaration
Objective-C
@interface CalibrateRobertson : CalibrateCRF
Swift
class CalibrateRobertson : CalibrateCRF
-
This class represents the high-level API for classification models.
ClassificationModel allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets preprocessing input, runs a forward pass, and returns the top-1 prediction.
Member of
Dnn
-
Declaration
Objective-C
@interface Converters : NSObject
+ (Mat*)vector_Point_to_Mat:(NSArray<Point2i*>*)pts NS_SWIFT_NAME(vector_Point_to_Mat(_:));
+ (NSArray<Point2i*>*)Mat_to_vector_Point:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point(_:));
+ (Mat*)vector_Point2f_to_Mat:(NSArray<Point2f*>*)pts NS_SWIFT_NAME(vector_Point2f_to_Mat(_:));
+ (NSArray<Point2f*>*)Mat_to_vector_Point2f:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point2f(_:));
+ (Mat*)vector_Point2d_to_Mat:(NSArray<Point2d*>*)pts NS_SWIFT_NAME(vector_Point2d_to_Mat(_:));
+ (NSArray<Point2f*>*)Mat_to_vector_Point2d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point2d(_:));
+ (Mat*)vector_Point3i_to_Mat:(NSArray<Point3i*>*)pts NS_SWIFT_NAME(vector_Point3i_to_Mat(_:));
+ (NSArray<Point3i*>*)Mat_to_vector_Point3i:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3i(_:));
+ (Mat*)vector_Point3f_to_Mat:(NSArray<Point3f*>*)pts NS_SWIFT_NAME(vector_Point3f_to_Mat(_:));
+ (NSArray<Point3f*>*)Mat_to_vector_Point3f:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3f(_:));
+ (Mat*)vector_Point3d_to_Mat:(NSArray<Point3d*>*)pts NS_SWIFT_NAME(vector_Point3d_to_Mat(_:));
+ (NSArray<Point3d*>*)Mat_to_vector_Point3d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Point3d(_:));
+ (Mat*)vector_float_to_Mat:(NSArray<NSNumber*>*)fs NS_SWIFT_NAME(vector_float_to_Mat(_:));
+ (NSArray<NSNumber*>*)Mat_to_vector_float:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_float(_:));
+ (Mat*)vector_uchar_to_Mat:(NSArray<NSNumber*>*)us NS_SWIFT_NAME(vector_uchar_to_Mat(_:));
+ (NSArray<NSNumber*>*)Mat_to_vector_uchar:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_uchar(_:));
+ (Mat*)vector_char_to_Mat:(NSArray<NSNumber*>*)cs NS_SWIFT_NAME(vector_char_to_Mat(_:));
+ (NSArray<NSNumber*>*)Mat_to_vector_char:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_char(_:));
+ (Mat*)vector_int_to_Mat:(NSArray<NSNumber*>*)is NS_SWIFT_NAME(vector_int_to_Mat(_:));
+ (NSArray<NSNumber*>*)Mat_to_vector_int:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_int(_:));
+ (Mat*)vector_Rect_to_Mat:(NSArray<Rect2i*>*)rs NS_SWIFT_NAME(vector_Rect_to_Mat(_:));
+ (NSArray<Rect2i*>*)Mat_to_vector_Rect:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Rect(_:));
+ (Mat*)vector_Rect2d_to_Mat:(NSArray<Rect2d*>*)rs NS_SWIFT_NAME(vector_Rect2d_to_Mat(_:));
+ (NSArray<Rect2d*>*)Mat_to_vector_Rect2d:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_Rect2d(_:));
+ (Mat*)vector_KeyPoint_to_Mat:(NSArray<KeyPoint*>*)kps NS_SWIFT_NAME(vector_KeyPoint_to_Mat(_:));
+ (NSArray<KeyPoint*>*)Mat_to_vector_KeyPoint:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_KeyPoint(_:));
+ (Mat*)vector_double_to_Mat:(NSArray<NSNumber*>*)ds NS_SWIFT_NAME(vector_double_to_Mat(_:));
+ (NSArray<NSNumber*>*)Mat_to_vector_double:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_double(_:));
+ (Mat*)vector_DMatch_to_Mat:(NSArray<DMatch*>*)matches NS_SWIFT_NAME(vector_DMatch_to_Mat(_:));
+ (NSArray<DMatch*>*)Mat_to_vector_DMatch:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_DMatch(_:));
+ (Mat*)vector_RotatedRect_to_Mat:(NSArray<RotatedRect*>*)rs NS_SWIFT_NAME(vector_RotatedRect_to_Mat(_:));
+ (NSArray<RotatedRect*>*)Mat_to_vector_RotatedRect:(Mat*)mat NS_SWIFT_NAME(Mat_to_vector_RotatedRect(_:));
@end
Swift
class Converters : NSObject
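The NS_SWIFT_NAME annotations above map directly to Swift, so for example the point converters can be called as follows (the Point2i initializer labels are an assumption):
Swift
let pts = [Point2i(x: 0, y: 0), Point2i(x: 5, y: 7)]
let mat = Converters.vector_Point_to_Mat(pts)   // [Point2i] -> Mat
let back = Converters.Mat_to_vector_Point(mat)  // Mat -> [Point2i]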
-
The Core module
Member classes:
Algorithm, TickMeter
Member enums:
Code, DecompTypes, NormTypes, CmpTypes, GemmFlags, DftFlags, BorderTypes, SortFlags, CovarFlags, KmeansFlags, ReduceTypes, RotateFlags, Flags, Flags, FormatType, Param
Declaration
Objective-C
@interface Core : NSObject
Swift
class Core : NSObject
-
Declaration
Objective-C
@interface CvAbstractCamera2 : NSObject
@property UIDeviceOrientation currentDeviceOrientation;
@property BOOL cameraAvailable;
@property (nonatomic, strong) AVCaptureSession* captureSession;
@property (nonatomic, strong) AVCaptureConnection* videoCaptureConnection;
@property (nonatomic, readonly) BOOL running;
@property (nonatomic, readonly) BOOL captureSessionLoaded;
@property (nonatomic, assign) int defaultFPS;
@property (nonatomic, readonly) AVCaptureVideoPreviewLayer *captureVideoPreviewLayer;
@property (nonatomic, assign) AVCaptureDevicePosition defaultAVCaptureDevicePosition;
@property (nonatomic, assign) AVCaptureVideoOrientation defaultAVCaptureVideoOrientation;
@property (nonatomic, assign) BOOL useAVCaptureVideoPreviewLayer;
@property (nonatomic, strong) NSString *const defaultAVCaptureSessionPreset;
@property (nonatomic, assign) int imageWidth;
@property (nonatomic, assign) int imageHeight;
@property (nonatomic, strong) UIView* parentView;
- (void)start;
- (void)stop;
- (void)switchCameras;
- (id)initWithParentView:(UIView*)parent;
- (void)createCaptureOutput;
- (void)createVideoPreviewLayer;
- (void)updateOrientation;
- (void)lockFocus;
- (void)unlockFocus;
- (void)lockExposure;
- (void)unlockExposure;
- (void)lockBalance;
- (void)unlockBalance;
@end
Swift
class CvAbstractCamera2 : NSObject
-
CvVideoCamera
Declaration
Objective-C
@class CvVideoCamera2;
Swift
class CvVideoCamera2 : CvAbstractCamera2, AVCaptureVideoDataOutputSampleBufferDelegate
-
CvPhotoCamera
Declaration
Objective-C
@class CvPhotoCamera2;
Swift
class CvPhotoCamera2 : CvAbstractCamera2, AVCapturePhotoCaptureDelegate
-
DIS optical flow algorithm.
This class implements the Dense Inverse Search (DIS) optical flow algorithm. More details about the algorithm can be found at CITE: Kroeger2016 . It includes three presets with preselected parameters to provide a reasonable trade-off between speed and quality. However, even the slowest preset is still relatively fast; use DeepFlow if you need better quality and don't care about speed.
This implementation includes several additional features compared to the algorithm described in the paper, including spatial propagation of flow vectors (REF: getUseSpatialPropagation), as well as an option to utilize an initial flow approximation passed to REF: calc (which is, essentially, temporal propagation, if the previous frame’s flow field is passed).
Member of
Video
Declaration
Objective-C
@interface DISOpticalFlow : DenseOpticalFlow
Swift
class DISOpticalFlow : DenseOpticalFlow
-
Structure for matching: query descriptor index, train descriptor index, train image index and distance between descriptors.
Declaration
Objective-C
@interface DMatch : NSObject
Swift
class DMatch : NSObject
-
The class represents a single decision tree or a collection of decision trees.
The current public interface of the class allows the user to train only a single decision tree; however, the class is capable of storing multiple decision trees and using them for prediction (by summing responses or using voting schemes), and classes derived from DTrees (such as RTrees and Boost) use this capability to implement decision tree ensembles.
See
REF: ml_intro_trees
Member of
Ml
-
Abstract base class for matching keypoint descriptors.
It has two groups of match methods: for matching descriptors of an image with another image or with an image set.
Member of
Features2d
-
This class represents the high-level API for object detection networks.
DetectionModel allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets preprocessing input, runs a forward pass, and returns the resulting detections. The SSD, Faster R-CNN, and YOLO topologies are supported by DetectionModel.
Member of
Dnn
-
This struct stores the scalar value (or array) of one of the following types: double, cv::String or int64. TODO: Maybe int64 is useless because the double type exactly stores at least 2^52 integers.
Member of
Dnn
Declaration
Objective-C
@interface DictValue : NSObject
Swift
class DictValue : NSObject
-
The Dnn module
Member classes:
DictValue, Layer, Net, Model, ClassificationModel, KeypointsModel, SegmentationModel, DetectionModel
Declaration
Objective-C
@interface Dnn : NSObject
Swift
class Dnn : NSObject
-
Simple wrapper for a vector of two doubles
Declaration
Objective-C
@interface Double2 : NSObject
Swift
class Double2 : NSObject
-
Simple wrapper for a vector of three doubles
Declaration
Objective-C
@interface Double3 : NSObject
Swift
class Double3 : NSObject
-
Class computing a dense optical flow using Gunnar Farneback's algorithm.
Member of
Video
Declaration
Objective-C
@interface FarnebackOpticalFlow : DenseOpticalFlow
Swift
class FarnebackOpticalFlow : DenseOpticalFlow
-
Abstract base class for 2D image feature detectors and descriptor extractors
Member of
Features2d
-
The Features2d module
Member classes:
Feature2D, SIFT, BRISK, ORB, MSER, FastFeatureDetector, AgastFeatureDetector, GFTTDetector, SimpleBlobDetector, Params, KAZE, AKAZE, DescriptorMatcher, BFMatcher, FlannBasedMatcher, BOWTrainer, BOWKMeansTrainer, BOWImgDescriptorExtractor
Member enums:
ScoreType, FastDetectorType, AgastDetectorType, DiffusivityType, DescriptorType, MatcherType, DrawMatchesFlags
Declaration
Objective-C
@interface Features2d : NSObject
Swift
class Features2d : NSObject
-
Flann-based descriptor matcher.
This matcher trains cv::flann::Index on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster when matching a large train collection than the brute force matcher. FlannBasedMatcher does not support masking permissible matches of descriptor sets because flann::Index does not support this.
Member of
Features2d
Declaration
Objective-C
@interface FlannBasedMatcher : DescriptorMatcher
Swift
class FlannBasedMatcher : DescriptorMatcher
-
Simple wrapper for a vector of four floats
Declaration
Objective-C
@interface Float4 : NSObject
Swift
class Float4 : NSObject
-
Simple wrapper for a vector of six floats
Declaration
Objective-C
@interface Float6 : NSObject
Swift
class Float6 : NSObject
-
Wrapping class for feature detection using the goodFeaturesToTrack function.
Member of
Features2d
-
Finds an arbitrary template in a grayscale image using the Generalized Hough Transform.
Detects position only, without translation and rotation CITE: Ballard1981 .
Member of
Imgproc
Declaration
Objective-C
@interface GeneralizedHoughBallard : GeneralizedHough
Swift
class GeneralizedHoughBallard : GeneralizedHough
-
Finds an arbitrary template in a grayscale image using the Generalized Hough Transform.
Detects position, translation and rotation CITE: Guil1999 .
Member of
Imgproc
Declaration
Objective-C
@interface GeneralizedHoughGuil : GeneralizedHough
Swift
class GeneralizedHoughGuil : GeneralizedHough
-
Implementation of the HOG (Histogram of Oriented Gradients) descriptor and object detector, introduced by Navneet Dalal and Bill Triggs CITE: Dalal2005 .
Useful links:
https://hal.inria.fr/inria-00548512/document/
https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients
https://software.intel.com/en-us/ipp-dev-reference-histogram-of-oriented-gradients-hog-descriptor
http://www.learnopencv.com/histogram-of-oriented-gradients
http://www.learnopencv.com/handwritten-digits-classification-an-opencv-c-python-tutorial
Member of
Objdetect
Declaration
Objective-C
@interface HOGDescriptor : NSObject
Swift
class HOGDescriptor : NSObject
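Example (a pedestrian-detection sketch; getDefaultPeopleDetector and detectMultiScale mirror the Java bindings, and the Swift labels and container types are assumptions):
Swift
let hog = HOGDescriptor()
hog.setSVMDetector(HOGDescriptor.getDefaultPeopleDetector())  // assumed factory for the built-in people detector
let gray = Mat()            // placeholder: grayscale input image
var locations = [Rect2i]()  // assumed container types
var weights = [Double]()
hog.detectMultiScale(img: gray, foundLocations: &locations, foundWeights: &weights)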
-
The Imgcodecs module
Member enums:
ImreadModes, ImwriteFlags, ImwriteEXRTypeFlags, ImwritePNGFlags, ImwritePAMFlags
Declaration
Objective-C
@interface Imgcodecs : NSObject
Swift
class Imgcodecs : NSObject
-
The Imgproc module
Member classes:
GeneralizedHough, GeneralizedHoughBallard, GeneralizedHoughGuil, CLAHE, Subdiv2D, LineSegmentDetector
Member enums:
SmoothMethod_c, MorphShapes_c, SpecialFilter, MorphTypes, MorphShapes, InterpolationFlags, WarpPolarMode, InterpolationMasks, DistanceTypes, DistanceTransformMasks, ThresholdTypes, AdaptiveThresholdTypes, GrabCutClasses, GrabCutModes, DistanceTransformLabelTypes, FloodFillFlags, ConnectedComponentsTypes, ConnectedComponentsAlgorithmsTypes, RetrievalModes, ContourApproximationModes, ShapeMatchModes, HoughModes, LineSegmentDetectorModes, HistCompMethods, ColorConversionCodes, RectanglesIntersectTypes, LineTypes, HersheyFonts, MarkerTypes, TemplateMatchModes, ColormapTypes
Declaration
Objective-C
@interface Imgproc : NSObject
Swift
class Imgproc : NSObject
-
Simple wrapper for a vector of four ints
Declaration
Objective-C
@interface Int4 : NSObject
Swift
class Int4 : NSObject
-
Class implementing the KAZE keypoint detector and descriptor extractor, described in CITE: ABD12 .
Note
AKAZE descriptors can only be used with KAZE or AKAZE keypoints. [ABD12] KAZE Features. Pablo F. Alcantarilla, Adrien Bartoli and Andrew J. Davison. In European Conference on Computer Vision (ECCV), Florence, Italy, October 2012.
Member of
Features2d
-
Kalman filter class.
The class implements a standard Kalman filter http://en.wikipedia.org/wiki/Kalman_filter, CITE: Welch95 . However, you can modify transitionMatrix, controlMatrix, and measurementMatrix to get extended Kalman filter functionality.
Note
In the C API, when the CvKalman* kalmanFilter structure is no longer needed, it should be released with cvReleaseKalman(&kalmanFilter).
Member of
Video
Declaration
Objective-C
@interface KalmanFilter : NSObject
Swift
class KalmanFilter : NSObject
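Example (a sketch of the predict/correct cycle for a 4-state, 2-measurement tracker; the initializer and method labels mirror the C++/Java APIs and are assumptions):
Swift
let kf = KalmanFilter(dynamParams: 4, measureParams: 2, controlParams: 0)
// ... configure kf.transitionMatrix / kf.measurementMatrix here ...
let measurement = Mat(rows: 2, cols: 1, type: CvType.CV_32F)  // the observed (x, y)
let prediction = kf.predict()                                 // a-priori state estimate
let corrected = kf.correct(measurement: measurement)          // a-posteriori estimate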
-
Object representing a point feature found by one of many available keypoint detectors, such as Harris corner detector, FAST, StarDetector, SURF, SIFT etc.
Declaration
Objective-C
@interface KeyPoint : NSObject
Swift
class KeyPoint : NSObject
-
This class represents the high-level API for keypoint models.
KeypointsModel allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets preprocessing input, runs a forward pass, and returns the x and y coordinates of each detected keypoint.
Member of
Dnn
-
This interface class allows building new Layers, the building blocks of networks.
Each class derived from Layer must implement the allocate() method to declare its own outputs and the forward() method to compute them. Also, before using the new layer in networks, you must register it using one of the REF: dnnLayerFactory "LayerFactory" macros.
Member of
Dnn
-
Maximally stable extremal region extractor.
The class encapsulates all the parameters of the MSER extraction algorithm (see the wiki article).
There are two different implementations of MSER: one for grey images and one for color images.
The grey image algorithm is taken from CITE: nister2008linear ; the paper claims to be faster than the union-find method; it actually gets 1.5~2m/s on a Centrino L7200 1.2GHz laptop.
The color image algorithm is taken from CITE: forssen2007maximally ; it should be much slower than the grey image method (3~4 times); the chi_table.h file is taken directly from the paper's source code, which is distributed under the GPL.
(Python) A complete example showing the use of the MSER detector can be found at samples/python/mser.py
Member of
Features2d
-
The resulting HDR image is calculated as a weighted average of the exposures, considering exposure values and camera response.
For more information see CITE: DM97 .
Member of
Photo
Declaration
Objective-C
@interface MergeDebevec : MergeExposures
Swift
class MergeDebevec : MergeExposures
-
Pixels are weighted using contrast, saturation and well-exposedness measures, then images are combined using Laplacian pyramids.
The resulting image weight is constructed as a weighted average of the contrast, saturation and well-exposedness measures.
The resulting image doesn't require tonemapping and can be converted to an 8-bit image by multiplying by 255, but it's recommended to apply gamma correction and/or linear tonemapping.
For more information see CITE: MK07 .
Member of
Photo
Declaration
Objective-C
@interface MergeMertens : MergeExposures
Swift
class MergeMertens : MergeExposures
-
The resulting HDR image is calculated as a weighted average of the exposures, considering exposure values and camera response.
For more information see CITE: RB99 .
Member of
Photo
Declaration
Objective-C
@interface MergeRobertson : MergeExposures
Swift
class MergeRobertson : MergeExposures
-
Result of an operation that determines the global minimum and maximum of an array
Declaration
Objective-C
@interface MinMaxLocResult : NSObject
Swift
class MinMaxLocResult : NSObject
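MinMaxLocResult is the return type of Core.minMaxLoc in the Java-style bindings. Example (the Swift signature and field names are assumptions):
Swift
let src = Mat(rows: 3, cols: 3, type: CvType.CV_8UC1)  // placeholder data
let r = Core.minMaxLoc(src)
print("min \(r.minVal) at \(r.minLoc), max \(r.maxVal) at \(r.maxLoc)")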
-
The Ml module
Member classes:
ParamGrid, TrainData, StatModel, NormalBayesClassifier, KNearest, SVM, EM, DTrees, RTrees, Boost, ANN_MLP, LogisticRegression, SVMSGD
Member enums:
VariableTypes, ErrorTypes, SampleTypes, StatModelFlags, KNearestTypes, SVMTypes, KernelTypes, ParamTypes, EMTypes, DTreeFlags, Types, TrainingMethods, ActivationFunctions, TrainFlags, RegKinds, Methods, SvmsgdType, MarginType
Declaration
Objective-C
@interface Ml : NSObject
Swift
class Ml : NSObject
-
Declaration
Objective-C
@interface Moments : NSObject
@property double m00;
@property double m10;
@property double m01;
@property double m20;
@property double m11;
@property double m02;
@property double m30;
@property double m21;
@property double m12;
@property double m03;
@property double mu20;
@property double mu11;
@property double mu02;
@property double mu30;
@property double mu21;
@property double mu12;
@property double mu03;
@property double nu20;
@property double nu11;
@property double nu02;
@property double nu30;
@property double nu21;
@property double nu12;
@property double nu03;
#ifdef __cplusplus
@property(readonly) cv::Moments& nativeRef;
#endif
-(instancetype)initWithM00:(double)m00 m10:(double)m10 m01:(double)m01 m20:(double)m20 m11:(double)m11 m02:(double)m02 m30:(double)m30 m21:(double)m21 m12:(double)m12 m03:(double)m03;
-(instancetype)init;
-(instancetype)initWithVals:(NSArray<NSNumber*>*)vals;
#ifdef __cplusplus
+(instancetype)fromNative:(cv::Moments&)moments;
#endif
-(void)set:(NSArray<NSNumber*>*)vals;
-(void)completeState;
-(NSString *)description;
@end
Swift
class Moments : NSObject
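The spatial moments above are enough to compute a shape's centroid: cx = m10/m00, cy = m01/m00. Example (a sketch assuming the bindings expose Imgproc.moments as in the Java API; the m00/m10/m01 properties come from the declaration above):
Swift
let contour = Mat()               // placeholder: a contour or binary image
let m = Imgproc.moments(contour)  // assumed factory, as in the Java bindings
if m.m00 != 0 {
    let cx = m.m10 / m.m00        // centroid x
    let cy = m.m01 / m.m00        // centroid y
    print("centroid: (\(cx), \(cy))")
}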
-
This class allows creating and manipulating comprehensive artificial neural networks.
A neural network is represented as a directed acyclic graph (DAG), where vertices are Layer instances and edges specify relationships between layer inputs and outputs.
Each network layer has a unique integer id and a unique string name within its network. LayerId can store either a layer name or a layer id.
This class supports reference counting of its instances, i.e. copies point to the same instance.
Member of
Dnn
Declaration
Objective-C
@interface Net : NSObject
Swift
class Net : NSObject
-
Class implementing the ORB (oriented BRIEF) keypoint detector and descriptor extractor, described in CITE: RRKB11 . The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using the FAST or Harris response, finds their orientation using first-order moments, and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation).
Member of
Features2d
-
The Objdetect module
Member classes:
BaseCascadeClassifier, CascadeClassifier, HOGDescriptor, QRCodeDetector
Member enums:
HistogramNormType, DescriptorStorageFormat, ObjectStatus
Declaration
Objective-C
@interface Objdetect : NSObject
Swift
class Objdetect : NSObject
-
The structure represents the logarithmic grid range of statmodel parameters.
It is used for optimizing statmodel accuracy by varying model parameters, with the accuracy estimate computed by cross-validation.
Member of
Ml
Declaration
Objective-C
@interface ParamGrid : NSObject
Swift
class ParamGrid : NSObject
-
Declaration
Objective-C
@interface Params : NSObject
Swift
class Params : NSObject
-
The Photo module
Member classes:
Tonemap, TonemapDrago, TonemapReinhard, TonemapMantiuk, AlignExposures, AlignMTB, CalibrateCRF, CalibrateDebevec, CalibrateRobertson, MergeExposures, MergeDebevec, MergeMertens, MergeRobertson
Declaration
Objective-C
@interface Photo : NSObject
Swift
class Photo : NSObject
-
Represents a two dimensional point whose coordinate values are of type double
Declaration
Objective-C
@interface Point2d : NSObject
Swift
class Point2d : NSObject
-
Represents a two dimensional point whose coordinate values are of type float
Declaration
Objective-C
@interface Point2f : NSObject
Swift
class Point2f : NSObject
-
Represents a two dimensional point whose coordinate values are of type int
Declaration
Objective-C
@interface Point2i : NSObject
Swift
class Point : NSObject
-
Represents a three dimensional point whose coordinate values are of type double
Declaration
Objective-C
@interface Point3d : NSObject
Swift
class Point3d : NSObject
-
Represents a three dimensional point whose coordinate values are of type float
Declaration
Objective-C
@interface Point3f : NSObject
Swift
class Point3f : NSObject
-
Represents a three dimensional point whose coordinate values are of type int
Declaration
Objective-C
@interface Point3i : NSObject
Swift
class Point3i : NSObject
-
Class for detecting and decoding QR codes in an image.
Member of
Objdetect
Declaration
Objective-C
@interface QRCodeDetector : NSObject
Swift
class QRCodeDetector : NSObject
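Example (a detect-and-decode sketch; detectAndDecode mirrors the Java bindings, and the Swift labels are assumptions):
Swift
let detector = QRCodeDetector()
let image = Mat()    // placeholder: input image
let corners = Mat()  // receives the four corner points of the code
let payload = detector.detectAndDecode(img: image, points: corners)
if !payload.isEmpty {
    print("decoded: \(payload)")
}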
-
Represents a range of dimension indices
Declaration
Objective-C
@interface Range : NSObject
Swift
class Range : NSObject
-
Represents a rectangle whose coordinate and dimension values are of type double
Declaration
Objective-C
@interface Rect2d : NSObject
Swift
class Rect2d : NSObject
-
Represents a rectangle whose coordinate and dimension values are of type float
Declaration
Objective-C
@interface Rect2f : NSObject
Swift
class Rect2f : NSObject
-
Represents a rectangle whose coordinate and dimension values are of type int
Declaration
Objective-C
@interface Rect2i : NSObject
Swift
class Rect : NSObject
-
Represents a rotated rectangle on a plane
Declaration
Objective-C
@interface RotatedRect : NSObject
Swift
class RotatedRect : NSObject
-
Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) algorithm by D. Lowe CITE: Lowe04 .
Member of
Features2d
-
Represents a four element vector
Declaration
Objective-C
@interface Scalar : NSObject
Swift
class Scalar : NSObject
-
This class represents the high-level API for segmentation models.
SegmentationModel allows setting parameters for preprocessing the input image. It creates a net from a file with trained weights and config, sets preprocessing input, runs a forward pass, and returns the class prediction for each pixel.
Member of
Dnn
-
Class for extracting blobs from an image.
The class implements a simple algorithm for extracting blobs from an image:
- Convert the source image to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold (exclusive) with distance thresholdStep between neighboring thresholds.
- Extract connected components from every binary image by findContours and calculate their centers.
- Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob, which is controlled by the minDistBetweenBlobs parameter.
- From the groups, estimate the final centers of blobs and their radii, and return them as the locations and sizes of keypoints.
This class performs several filtrations of returned blobs. You should set filterBy* to true/false to turn on/off the corresponding filtration. Available filtrations:
- By color. This filter compares the intensity of a binary image at the center of a blob to blobColor. If they differ, the blob is filtered out. Use blobColor = 0 to extract dark blobs and blobColor = 255 to extract light blobs.
- By area. Extracted blobs have an area between minArea (inclusive) and maxArea (exclusive).
- By circularity. Extracted blobs have circularity (\frac{4 \pi \cdot Area}{perimeter^2}) between minCircularity (inclusive) and maxCircularity (exclusive).
- By ratio of the minimum inertia to maximum inertia. Extracted blobs have this ratio between minInertiaRatio (inclusive) and maxInertiaRatio (exclusive).
- By convexity. Extracted blobs have convexity (area / area of blob convex hull) between minConvexity (inclusive) and maxConvexity (exclusive).
Default values of parameters are tuned to extract dark circular blobs.
Member of
Features2d
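Example (a tuning sketch for the filters described above; the Params type and the create(parameters:) factory mirror the Java bindings, and the Swift names are assumptions):
Swift
let params = SimpleBlobDetectorParams()  // name is an assumption; may correspond to the Params class listed under Features2d
params.minThreshold = 10                 // thresholding range described above
params.maxThreshold = 200
params.filterByArea = true               // keep blobs with area >= 100
params.minArea = 100
let detector = SimpleBlobDetector.create(parameters: params)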
-
Represents the dimensions of a rectangle whose values are of type double
Declaration
Objective-C
@interface Size2d : NSObject
Swift
class Size2d : NSObject
-
Represents the dimensions of a rectangle whose values are of type float
Declaration
Objective-C
@interface Size2f : NSObject
Swift
class Size2f : NSObject
-
Represents the dimensions of a rectangle whose values are of type int
Declaration
Objective-C
@interface Size2i : NSObject
Swift
class Size : NSObject
-
Class used for calculating a sparse optical flow.
The class can calculate an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.
See
calcOpticalFlowPyrLK
Member of
Video
Declaration
Objective-C
@interface SparsePyrLKOpticalFlow : SparseOpticalFlow
Swift
class SparsePyrLKOpticalFlow : SparseOpticalFlow
-
The class implements the modified H. Hirschmuller algorithm CITE: HH08 that differs from the original one as follows:
- By default, the algorithm is single-pass, which means that you consider only 5 directions instead of 8. Set mode=StereoSGBM::MODE_HH in createStereoSGBM to run the full variant of the algorithm, but beware that it may consume a lot of memory.
- The algorithm matches blocks, not individual pixels, though setting blockSize=1 reduces the blocks to single pixels.
- The mutual information cost function is not implemented. Instead, a simpler Birchfield-Tomasi sub-pixel metric from CITE: BT98 is used, though color images are supported as well.
- Some pre- and post-processing steps from K. Konolige's algorithm StereoBM are included, for example: pre-filtering (StereoBM::PREFILTER_XSOBEL type) and post-filtering (uniqueness check, quadratic interpolation and speckle filtering).
@note - (Python) An example illustrating the use of the StereoSGBM matching algorithm can be found at opencv_source_code/samples/python/stereo_match.py
Member of
Calib3d
Declaration
Objective-C
@interface StereoSGBM : StereoMatcher
Swift
class StereoSGBM : StereoMatcher
-
Class representing termination criteria for iterative algorithms.
Declaration
Objective-C
@interface TermCriteria : NSObject
Swift
class TermCriteria : NSObject
-
A class to measure elapsed time.
The class computes elapsed time by counting the number of ticks per second. That is, the following code computes the execution time in seconds: SNIPPET: snippets/core_various.cpp TickMeter_total
It is also possible to compute the average time over multiple runs: SNIPPET: snippets/core_various.cpp TickMeter_average
See
getTickCount, getTickFrequency
Member of
Core
Declaration
Objective-C
@interface TickMeter : NSObject
Swift
class TickMeter : NSObject
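Example (a minimal timing sketch; start/stop mirror the C++ API, and getTimeSec() is an assumed accessor):
Swift
let tm = TickMeter()
tm.start()
// ... code to be timed ...
tm.stop()
print("elapsed: \(tm.getTimeSec()) s")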
-
Adaptive logarithmic mapping is a fast global tonemapping algorithm that scales the image in the logarithmic domain.
Since it is a global operator, the same function is applied to all pixels; it is controlled by the bias parameter.
Optional saturation enhancement is possible as described in CITE: FL02 .
For more information see CITE: DM03 .
Member of
Photo
-
Class encapsulating training data.
Please note that the class only specifies the interface of training data, not its implementation. All the statistical model classes in the ml module accept Ptr<TrainData> as a parameter. In other words, you can create your own class derived from TrainData and pass a smart pointer to an instance of this class into StatModel::train.
See
REF: ml_intro_data
Member of
Ml
Declaration
Objective-C
@interface TrainData : NSObject
Swift
class TrainData : NSObject
-
Variational optical flow refinement
This class implements variational refinement of the input flow field, i.e. it uses the input flow to initialize the minimization of the following functional:
E(U) = \int_{\Omega} \delta \Psi(E_I) + \gamma \Psi(E_G) + \alpha \Psi(E_S), where E_I, E_G, E_S are the color constancy, gradient constancy and smoothness terms respectively, and \Psi(s^2) = \sqrt{s^2 + \epsilon^2} is a robust penalizer to limit the influence of outliers. A complete formulation and a description of the minimization procedure can be found in CITE: Brox2004 .
Member of
Video
Declaration
Objective-C
@interface VariationalRefinement : DenseOpticalFlow
Swift
class VariationalRefinement : DenseOpticalFlow
-
The Video module
Member classes:
KalmanFilter, DenseOpticalFlow, SparseOpticalFlow, FarnebackOpticalFlow, VariationalRefinement, DISOpticalFlow, SparsePyrLKOpticalFlow, BackgroundSubtractor, BackgroundSubtractorMOG2, BackgroundSubtractorKNN
Declaration
Objective-C
@interface Video : NSObject
Swift
class Video : NSObject
-
Class for video capturing from video files, image sequences or cameras.
The class provides C++ API for capturing video from cameras or for reading video files and image sequences.
Here is how the class can be used: INCLUDE: samples/cpp/videocapture_basic.cpp
Note
In the REF: videoio_c "C API" the black-box structure CvCapture is used instead of VideoCapture.
Note
- (C++) A basic sample on using the VideoCapture interface can be found at
OPENCV_SOURCE_CODE/samples/cpp/videocapture_starter.cpp
- (Python) A basic sample on using the %VideoCapture interface can be found at
OPENCV_SOURCE_CODE/samples/python/video.py
- (Python) A multi threaded video processing sample can be found at
OPENCV_SOURCE_CODE/samples/python/video_threaded.py
- (Python) A VideoCapture sample showcasing some features of the Video4Linux2 backend
OPENCV_SOURCE_CODE/samples/python/video_v4l2.py
Member of
Videoio
Declaration
Objective-C
@interface VideoCapture : NSObject
Swift
class VideoCapture : NSObject
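Example (a basic capture-loop sketch along the lines of the referenced C++ sample; the initializer and read labels are assumptions about the Swift bindings):
Swift
let capture = VideoCapture(index: 0)  // assumed initializer: default camera
let frame = Mat()
while capture.isOpened() && capture.read(image: frame) {
    // process `frame` here
    break  // stop after one frame in this sketch
}
capture.release()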
-
The Videoio module
Member classes:
VideoCapture, VideoWriter
Member enums:
VideoCaptureAPIs, VideoCaptureProperties, VideoWriterProperties
Declaration
Objective-C
@interface Videoio : NSObject
Swift
class Videoio : NSObject