Video

Objective-C

@interface Video : NSObject

Swift

class Video : NSObject

The Video module

Member classes: KalmanFilter, DenseOpticalFlow, SparseOpticalFlow, FarnebackOpticalFlow, VariationalRefinement, DISOpticalFlow, SparsePyrLKOpticalFlow, BackgroundSubtractor, BackgroundSubtractorMOG2, BackgroundSubtractorKNN

Class Constants

  • Declaration

    Objective-C

    @property (class, readonly) int OPTFLOW_USE_INITIAL_FLOW

    Swift

    class var OPTFLOW_USE_INITIAL_FLOW: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int OPTFLOW_LK_GET_MIN_EIGENVALS

    Swift

    class var OPTFLOW_LK_GET_MIN_EIGENVALS: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int OPTFLOW_FARNEBACK_GAUSSIAN

    Swift

    class var OPTFLOW_FARNEBACK_GAUSSIAN: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int MOTION_TRANSLATION

    Swift

    class var MOTION_TRANSLATION: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int MOTION_EUCLIDEAN

    Swift

    class var MOTION_EUCLIDEAN: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int MOTION_AFFINE

    Swift

    class var MOTION_AFFINE: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int MOTION_HOMOGRAPHY

    Swift

    class var MOTION_HOMOGRAPHY: Int32 { get }

Methods

  • Read a .flo file

    The function readOpticalFlow loads a flow field from a file and returns it as a single matrix. The resulting Mat has type CV_32FC2 (floating-point, 2-channel). The first channel corresponds to the flow in the horizontal direction (u), the second to the vertical direction (v).

    Declaration

    Objective-C

    + (nonnull Mat *)readOpticalFlow:(nonnull NSString *)path;

    Swift

    class func readOpticalFlow(path: String) -> Mat

    Parameters

    path

    Path to the file to be loaded
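
    A minimal usage sketch; the framework import name opencv2 and the file name are illustrative assumptions, not part of this API:

    import opencv2

    // Load a .flo file; the result is a CV_32FC2 Mat of per-pixel (u, v) flow.
    let flow = Video.readOpticalFlow(path: "flow.flo")
    print("flow: \(flow.cols())x\(flow.rows()), channels: \(flow.channels())")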

  • Creates KNN Background Subtractor

    Declaration

    Objective-C

    + (nonnull BackgroundSubtractorKNN *)
        createBackgroundSubtractorKNN:(int)history
                       dist2Threshold:(double)dist2Threshold
                        detectShadows:(BOOL)detectShadows;

    Swift

    class func createBackgroundSubtractorKNN(history: Int32, dist2Threshold: Double, detectShadows: Bool) -> BackgroundSubtractorKNN

    Parameters

    history

    Length of the history.

    dist2Threshold

    Threshold on the squared distance between the pixel and the sample to decide whether a pixel is close to that sample. This parameter does not affect the background update.

    detectShadows

    If true, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to false.
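
    A hedged sketch of typical use; frame stands for an assumed 8-bit BGR Mat taken from a video source, and apply(image:fgmask:) is the method inherited from BackgroundSubtractor:

    let knn = Video.createBackgroundSubtractorKNN(history: 500,
                                                  dist2Threshold: 400.0,
                                                  detectShadows: true)
    let fgMask = Mat()
    // Foreground pixels become 255; detected shadows are marked with a gray value.
    knn.apply(image: frame, fgmask: fgMask)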

  • Creates KNN Background Subtractor

    Declaration

    Objective-C

    + (nonnull BackgroundSubtractorKNN *)createBackgroundSubtractorKNN:(int)history
                                                        dist2Threshold:
                                                            (double)dist2Threshold;

    Swift

    class func createBackgroundSubtractorKNN(history: Int32, dist2Threshold: Double) -> BackgroundSubtractorKNN

    Parameters

    history

    Length of the history.

    dist2Threshold

    Threshold on the squared distance between the pixel and the sample to decide whether a pixel is close to that sample. This parameter does not affect the background update.

  • Creates KNN Background Subtractor

    Declaration

    Objective-C

    + (nonnull BackgroundSubtractorKNN *)createBackgroundSubtractorKNN:(int)history;

    Swift

    class func createBackgroundSubtractorKNN(history: Int32) -> BackgroundSubtractorKNN

    Parameters

    history

    Length of the history.

  • Creates KNN Background Subtractor

    Declaration

    Objective-C

    + (nonnull BackgroundSubtractorKNN *)createBackgroundSubtractorKNN;

    Swift

    class func createBackgroundSubtractorKNN() -> BackgroundSubtractorKNN
  • Creates MOG2 Background Subtractor

    Declaration

    Objective-C

    + (nonnull BackgroundSubtractorMOG2 *)
        createBackgroundSubtractorMOG2:(int)history
                          varThreshold:(double)varThreshold
                         detectShadows:(BOOL)detectShadows;

    Swift

    class func createBackgroundSubtractorMOG2(history: Int32, varThreshold: Double, detectShadows: Bool) -> BackgroundSubtractorMOG2

    Parameters

    history

    Length of the history.

    varThreshold

    Threshold on the squared Mahalanobis distance between the pixel and the model to decide whether a pixel is well described by the background model. This parameter does not affect the background update.

    detectShadows

    If true, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to false.
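
    A brief sketch, analogous to the KNN example above; frame is again an assumed input Mat, and the learningRate overload of apply is used:

    let mog2 = Video.createBackgroundSubtractorMOG2(history: 500,
                                                    varThreshold: 16.0,
                                                    detectShadows: true)
    let fgMask = Mat()
    // A learning rate of 0 freezes the model; -1 lets the algorithm choose one.
    mog2.apply(image: frame, fgmask: fgMask, learningRate: -1)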

  • Creates MOG2 Background Subtractor

    Declaration

    Objective-C

    + (nonnull BackgroundSubtractorMOG2 *)
        createBackgroundSubtractorMOG2:(int)history
                          varThreshold:(double)varThreshold;

    Swift

    class func createBackgroundSubtractorMOG2(history: Int32, varThreshold: Double) -> BackgroundSubtractorMOG2

    Parameters

    history

    Length of the history.

    varThreshold

    Threshold on the squared Mahalanobis distance between the pixel and the model to decide whether a pixel is well described by the background model. This parameter does not affect the background update.

  • Creates MOG2 Background Subtractor

    Declaration

    Objective-C

    + (nonnull BackgroundSubtractorMOG2 *)createBackgroundSubtractorMOG2:
        (int)history;

    Swift

    class func createBackgroundSubtractorMOG2(history: Int32) -> BackgroundSubtractorMOG2

    Parameters

    history

    Length of the history.

  • Creates MOG2 Background Subtractor

    Declaration

    Objective-C

    + (nonnull BackgroundSubtractorMOG2 *)createBackgroundSubtractorMOG2;

    Swift

    class func createBackgroundSubtractorMOG2() -> BackgroundSubtractorMOG2
  • Finds an object center, size, and orientation.

    See the OpenCV sample camshiftdemo.c that tracks colored objects.

    @note

    • (Python) A sample explaining the camshift tracking algorithm can be found at opencv_source_code/samples/python/camshift.py

    Declaration

    Objective-C

    + (nonnull RotatedRect *)CamShift:(nonnull Mat *)probImage
                               window:(nonnull Rect2i *)window
                             criteria:(nonnull TermCriteria *)criteria;

    Swift

    class func CamShift(probImage: Mat, window: Rect2i, criteria: TermCriteria) -> RotatedRect

    Parameters

    probImage

    Back projection of the object histogram. See calcBackProject.

    window

    Initial search window.

    criteria

    Stop criteria for the underlying meanShift. (In the old interfaces, the function also returned the number of iterations CAMSHIFT took to converge.)

    The function implements the CAMSHIFT object tracking algorithm CITE: Bradski98. First, it finds an object center using meanShift and then adjusts the window size and finds the optimal rotation. The function returns the rotated rectangle structure that includes the object position, size, and orientation. The next position of the search window can be obtained with RotatedRect::boundingRect().
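
    An illustrative tracking step; backProj is an assumed back projection Mat (e.g. from calcBackProject), and the raw TermCriteria type value 3 stands for COUNT | EPS:

    let track = Rect2i(x: 100, y: 100, width: 80, height: 80)
    let term = TermCriteria(type: 3, maxCount: 10, epsilon: 1.0) // COUNT | EPS
    let box = Video.CamShift(probImage: backProj, window: track, criteria: term)
    // track is updated in place to the next search window; box carries the
    // object's position, size, and orientation.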

  • Write a .flo to disk

    The function stores a flow field in a file, returning true on success and false otherwise. The flow field must be a 2-channel, floating-point matrix (CV_32FC2). The first channel corresponds to the flow in the horizontal direction (u), the second to the vertical direction (v).

    Declaration

    Objective-C

    + (BOOL)writeOpticalFlow:(nonnull NSString *)path flow:(nonnull Mat *)flow;

    Swift

    class func writeOpticalFlow(path: String, flow: Mat) -> Bool

    Parameters

    path

    Path to the file to be written

    flow

    Flow field to be stored
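
    A one-line sketch; Foundation's NSTemporaryDirectory() is used as an assumed writable location, and flow is the Mat from the readOpticalFlow example above:

    let ok = Video.writeOpticalFlow(path: NSTemporaryDirectory() + "out.flo", flow: flow)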

  • Computes the Enhanced Correlation Coefficient value between two images CITE: EP08 .

    @sa findTransformECC

    Declaration

    Objective-C

    + (double)computeECC:(nonnull Mat *)templateImage
              inputImage:(nonnull Mat *)inputImage
               inputMask:(nonnull Mat *)inputMask;

    Swift

    class func computeECC(templateImage: Mat, inputImage: Mat, inputMask: Mat) -> Double

    Parameters

    templateImage

    single-channel template image; CV_8U or CV_32F array.

    inputImage

    single-channel input image to be warped to provide an image similar to templateImage, same type as templateImage.

    inputMask

    An optional mask to indicate valid values of inputImage.
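
    A small sketch; templateGray and inputGray are assumed single-channel CV_8U or CV_32F Mats of the same size, and an empty Mat is passed as a stand-in for no mask:

    let ecc = Video.computeECC(templateImage: templateGray,
                               inputImage: inputGray,
                               inputMask: Mat())
    // Values close to 1 indicate strong agreement between the images.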

  • Computes the Enhanced Correlation Coefficient value between two images CITE: EP08 .

    @sa findTransformECC

    Declaration

    Objective-C

    + (double)computeECC:(nonnull Mat *)templateImage
              inputImage:(nonnull Mat *)inputImage;

    Swift

    class func computeECC(templateImage: Mat, inputImage: Mat) -> Double

    Parameters

    templateImage

    single-channel template image; CV_8U or CV_32F array.

    inputImage

    single-channel input image to be warped to provide an image similar to templateImage, same type as templateImage.

  • Finds the geometric transform (warp) between two images in terms of the ECC criterion CITE: EP08 .

    The function estimates the optimum transformation (warpMatrix) with respect to ECC criterion (CITE: EP08), that is

    \texttt{warpMatrix} = \arg\max_{W} \texttt{ECC}(\texttt{templateImage}(x,y), \texttt{inputImage}(x',y'))

    where

    \begin{bmatrix} x' \\ y' \end{bmatrix} = W \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

    (the equation holds with homogeneous coordinates for homography). It returns the final enhanced correlation coefficient, that is, the correlation coefficient between the template image and the final warped input image. When a 3\times 3 matrix is given with motionType = 0, 1, or 2, the third row is ignored.

    Unlike findHomography and estimateRigidTransform, the function findTransformECC implements an area-based alignment that builds on intensity similarities. In essence, the function updates the initial transformation that roughly aligns the images. If this information is missing, the identity warp (unity matrix) is used as an initialization. Note that if the images undergo strong displacements/rotations, an initial transformation that roughly aligns the images is necessary (e.g., a simple Euclidean/similarity transform that allows the images to show approximately the same content). Use inverse warping in the second image to take an image close to the first one, i.e. use the flag WARP_INVERSE_MAP with warpAffine or warpPerspective. See also the OpenCV sample image_alignment.cpp that demonstrates the use of the function. Note that the function throws an exception if the algorithm does not converge.

    @sa computeECC, estimateAffine2D, estimateAffinePartial2D, findHomography

    Declaration

    Objective-C

    + (double)findTransformECC:(nonnull Mat *)templateImage
                    inputImage:(nonnull Mat *)inputImage
                    warpMatrix:(nonnull Mat *)warpMatrix
                    motionType:(int)motionType
                      criteria:(nonnull TermCriteria *)criteria
                     inputMask:(nonnull Mat *)inputMask
                 gaussFiltSize:(int)gaussFiltSize;

    Swift

    class func findTransformECC(templateImage: Mat, inputImage: Mat, warpMatrix: Mat, motionType: Int32, criteria: TermCriteria, inputMask: Mat, gaussFiltSize: Int32) -> Double

    Parameters

    templateImage

    single-channel template image; CV_8U or CV_32F array.

    inputImage

    single-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage.

    warpMatrix

    floating-point 2\times 3 or 3\times 3 mapping matrix (warp).

    motionType

    parameter, specifying the type of motion:

    • MOTION_TRANSLATION sets a translational motion model; warpMatrix is 2\times 3 with the first 2\times 2 part being the unity matrix and the remaining two parameters being estimated.
    • MOTION_EUCLIDEAN sets a Euclidean (rigid) transformation as the motion model; three parameters are estimated; warpMatrix is 2\times 3.
    • MOTION_AFFINE sets an affine motion model (DEFAULT); six parameters are estimated; warpMatrix is 2\times 3.
    • MOTION_HOMOGRAPHY sets a homography as the motion model; eight parameters are estimated; warpMatrix is 3\times 3.

    criteria

    parameter, specifying the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations (a negative criteria.epsilon makes criteria.maxCount the only termination criterion).

    inputMask

    An optional mask to indicate valid values of inputImage.

    gaussFiltSize

    An optional value indicating the size of the Gaussian blur filter (default: 5).
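
    A hedged sketch of ECC-based affine alignment; templateGray and inputGray are assumed single-channel Mats, the empty Mat stands for no mask, and the raw TermCriteria type value 3 means COUNT | EPS:

    let warp = Mat.eye(rows: 2, cols: 3, type: CvType.CV_32F)  // identity initialization
    let ecc = Video.findTransformECC(templateImage: templateGray,
                                     inputImage: inputGray,
                                     warpMatrix: warp,
                                     motionType: Video.MOTION_AFFINE,
                                     criteria: TermCriteria(type: 3, maxCount: 50, epsilon: 1e-6),
                                     inputMask: Mat(),
                                     gaussFiltSize: 5)
    // warp now maps inputImage coordinates onto templateImage; apply it with
    // warpAffine using the WARP_INVERSE_MAP flag, as described above.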

  • Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.

    Declaration

    Objective-C

    + (int)buildOpticalFlowPyramid:(nonnull Mat *)img
                           pyramid:(nonnull NSMutableArray<Mat *> *)pyramid
                           winSize:(nonnull Size2i *)winSize
                          maxLevel:(int)maxLevel
                   withDerivatives:(BOOL)withDerivatives
                         pyrBorder:(int)pyrBorder
                       derivBorder:(int)derivBorder
                tryReuseInputImage:(BOOL)tryReuseInputImage;

    Swift

    class func buildOpticalFlowPyramid(img: Mat, pyramid: NSMutableArray, winSize: Size2i, maxLevel: Int32, withDerivatives: Bool, pyrBorder: Int32, derivBorder: Int32, tryReuseInputImage: Bool) -> Int32

    Parameters

    img

    8-bit input image.

    pyramid

    output pyramid.

    winSize

    window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels.

    maxLevel

    0-based maximal pyramid level number.

    withDerivatives

    set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then calcOpticalFlowPyrLK will calculate them internally.

    pyrBorder

    the border mode for pyramid layers.

    derivBorder

    the border mode for gradients.

    tryReuseInputImage

    put ROI of input image into the pyramid if possible. You can pass false to force data copying.

    Return Value

    number of levels in constructed pyramid. Can be less than maxLevel.
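
    A hedged sketch; gray is an assumed 8-bit grayscale Mat, and the raw border values 4 and 0 stand for BORDER_REFLECT_101 and BORDER_CONSTANT:

    let pyramid = NSMutableArray()
    let levels = Video.buildOpticalFlowPyramid(img: gray,
                                               pyramid: pyramid,
                                               winSize: Size2i(width: 21, height: 21),
                                               maxLevel: 3,
                                               withDerivatives: true,
                                               pyrBorder: 4,   // BORDER_REFLECT_101
                                               derivBorder: 0, // BORDER_CONSTANT
                                               tryReuseInputImage: true)
    // levels may come back smaller than maxLevel for small input images.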

  • Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.

    Declaration

    Objective-C

    + (int)buildOpticalFlowPyramid:(nonnull Mat *)img
                           pyramid:(nonnull NSMutableArray<Mat *> *)pyramid
                           winSize:(nonnull Size2i *)winSize
                          maxLevel:(int)maxLevel
                   withDerivatives:(BOOL)withDerivatives
                         pyrBorder:(int)pyrBorder
                       derivBorder:(int)derivBorder;

    Swift

    class func buildOpticalFlowPyramid(img: Mat, pyramid: NSMutableArray, winSize: Size2i, maxLevel: Int32, withDerivatives: Bool, pyrBorder: Int32, derivBorder: Int32) -> Int32

    Parameters

    img

    8-bit input image.

    pyramid

    output pyramid.

    winSize

    window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels.

    maxLevel

    0-based maximal pyramid level number.

    withDerivatives

    set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then calcOpticalFlowPyrLK will calculate them internally.

    pyrBorder

    the border mode for pyramid layers.

    derivBorder

    the border mode for gradients.

    Return Value

    number of levels in constructed pyramid. Can be less than maxLevel.

  • Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.

    Declaration

    Objective-C

    + (int)buildOpticalFlowPyramid:(nonnull Mat *)img
                           pyramid:(nonnull NSMutableArray<Mat *> *)pyramid
                           winSize:(nonnull Size2i *)winSize
                          maxLevel:(int)maxLevel
                   withDerivatives:(BOOL)withDerivatives
                         pyrBorder:(int)pyrBorder;

    Swift

    class func buildOpticalFlowPyramid(img: Mat, pyramid: NSMutableArray, winSize: Size2i, maxLevel: Int32, withDerivatives: Bool, pyrBorder: Int32) -> Int32

    Parameters

    img

    8-bit input image.

    pyramid

    output pyramid.

    winSize

    window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels.

    maxLevel

    0-based maximal pyramid level number.

    withDerivatives

    set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then calcOpticalFlowPyrLK will calculate them internally.

    pyrBorder

    the border mode for pyramid layers.

    Return Value

    number of levels in constructed pyramid. Can be less than maxLevel.

  • Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.

    Declaration

    Objective-C

    + (int)buildOpticalFlowPyramid:(nonnull Mat *)img
                           pyramid:(nonnull NSMutableArray<Mat *> *)pyramid
                           winSize:(nonnull Size2i *)winSize
                          maxLevel:(int)maxLevel
                   withDerivatives:(BOOL)withDerivatives;

    Swift

    class func buildOpticalFlowPyramid(img: Mat, pyramid: NSMutableArray, winSize: Size2i, maxLevel: Int32, withDerivatives: Bool) -> Int32

    Parameters

    img

    8-bit input image.

    pyramid

    output pyramid.

    winSize

    window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels.

    maxLevel

    0-based maximal pyramid level number.

    withDerivatives

    set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then calcOpticalFlowPyrLK will calculate them internally.

    Return Value

    number of levels in constructed pyramid. Can be less than maxLevel.

  • Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.

    Declaration

    Objective-C

    + (int)buildOpticalFlowPyramid:(nonnull Mat *)img
                           pyramid:(nonnull NSMutableArray<Mat *> *)pyramid
                           winSize:(nonnull Size2i *)winSize
                          maxLevel:(int)maxLevel;

    Swift

    class func buildOpticalFlowPyramid(img: Mat, pyramid: NSMutableArray, winSize: Size2i, maxLevel: Int32) -> Int32

    Parameters

    img

    8-bit input image.

    pyramid

    output pyramid.

    winSize

    window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels.

    maxLevel

    0-based maximal pyramid level number.

    Return Value

    number of levels in constructed pyramid. Can be less than maxLevel.

  • Finds an object on a back projection image.

    Declaration

    Objective-C

    + (int)meanShift:(nonnull Mat *)probImage
              window:(nonnull Rect2i *)window
            criteria:(nonnull TermCriteria *)criteria;

    Swift

    class func meanShift(probImage: Mat, window: Rect2i, criteria: TermCriteria) -> Int32

    Parameters

    probImage

    Back projection of the object histogram. See calcBackProject for details.

    window

    Initial search window.

    criteria

    Stop criteria for the iterative search algorithm. The function returns the number of iterations the search took to converge.

    The function implements the iterative object search algorithm: it takes the input back projection of an object and the initial position. The mass center in window of the back projection image is computed and the search window center shifts to the mass center. The procedure is repeated until the specified number of iterations criteria.maxCount is done or until the window center shifts by less than criteria.epsilon. The algorithm is used inside CamShift and, unlike CamShift, the search window size and orientation do not change during the search. You can simply pass the output of calcBackProject to this function, but better results can be obtained if you pre-filter the back projection and remove the noise. For example, you can do this by retrieving connected components with findContours, throwing away contours with small area (contourArea), and rendering the remaining contours with drawContours.
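
    A compact sketch, again assuming a precomputed back projection backProj; the raw TermCriteria type value 3 means COUNT | EPS:

    let window = Rect2i(x: 100, y: 100, width: 80, height: 80)
    let iters = Video.meanShift(probImage: backProj,
                                window: window,
                                criteria: TermCriteria(type: 3, maxCount: 10, epsilon: 1.0))
    // window now holds the converged search position; iters is the iteration count.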

  • Computes a dense optical flow using the Gunnar Farneback’s algorithm.

    The function finds an optical flow for each prev pixel using the CITE: Farneback2003 algorithm so that

    \texttt{prev} (y,x) \sim \texttt{next} ( y + \texttt{flow} (y,x)[1], x + \texttt{flow} (y,x)[0])

    @note

    • An example using the optical flow algorithm described by Gunnar Farneback can be found at opencv_source_code/samples/cpp/fback.cpp
    • (Python) An example using the optical flow algorithm described by Gunnar Farneback can be found at opencv_source_code/samples/python/opt_flow.py

    Declaration

    Objective-C

    + (void)calcOpticalFlowFarneback:(nonnull Mat *)prev
                                next:(nonnull Mat *)next
                                flow:(nonnull Mat *)flow
                           pyr_scale:(double)pyr_scale
                              levels:(int)levels
                             winsize:(int)winsize
                          iterations:(int)iterations
                              poly_n:(int)poly_n
                          poly_sigma:(double)poly_sigma
                               flags:(int)flags;

    Swift

    class func calcOpticalFlowFarneback(prev: Mat, next: Mat, flow: Mat, pyr_scale: Double, levels: Int32, winsize: Int32, iterations: Int32, poly_n: Int32, poly_sigma: Double, flags: Int32)

    Parameters

    prev

    first 8-bit single-channel input image.

    next

    second input image of the same size and the same type as prev.

    flow

    computed flow image that has the same size as prev and type CV_32FC2.

    pyr_scale

    parameter, specifying the image scale (<1) to build pyramids for each image; pyr_scale=0.5 means a classical pyramid, where each subsequent layer is half the size of the previous one.

    levels

    number of pyramid layers including the initial image; levels=1 means that no extra layers are created and only the original images are used.

    winsize

    averaging window size; larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield more blurred motion field.

    iterations

    number of iterations the algorithm does at each pyramid level.

    poly_n

    size of the pixel neighborhood used to find polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field; typically poly_n = 5 or 7.

    poly_sigma

    standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion; for poly_n=5, you can set poly_sigma=1.1, for poly_n=7, a good value would be poly_sigma=1.5.

    flags

    operation flags that can be a combination of the following:

    • OPTFLOW_USE_INITIAL_FLOW uses the input flow as an initial flow approximation.
    • OPTFLOW_FARNEBACK_GAUSSIAN uses the Gaussian \texttt{winsize}\times\texttt{winsize} filter instead of a box filter of the same size for optical flow estimation; usually, this option gives a more accurate flow than a box filter, at the cost of lower speed; normally, winsize for a Gaussian window should be set to a larger value to achieve the same level of robustness.

  • Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See CITE: Bouguet00 . The function is parallelized with the TBB library.

    @note

    • An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/cpp/lkdemo.cpp
    • (Python) An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/python/lk_track.py
    • (Python) An example using the Lucas-Kanade tracker for homography matching can be found at opencv_source_code/samples/python/lk_homography.py

    Declaration

    Objective-C

    + (void)calcOpticalFlowPyrLK:(nonnull Mat *)prevImg
                         nextImg:(nonnull Mat *)nextImg
                         prevPts:(nonnull Mat *)prevPts
                         nextPts:(nonnull Mat *)nextPts
                          status:(nonnull Mat *)status
                             err:(nonnull Mat *)err
                         winSize:(nonnull Size2i *)winSize
                        maxLevel:(int)maxLevel
                        criteria:(nonnull TermCriteria *)criteria
                           flags:(int)flags
                 minEigThreshold:(double)minEigThreshold;

    Swift

    class func calcOpticalFlowPyrLK(prevImg: Mat, nextImg: Mat, prevPts: Mat, nextPts: Mat, status: Mat, err: Mat, winSize: Size2i, maxLevel: Int32, criteria: TermCriteria, flags: Int32, minEigThreshold: Double)

    Parameters

    prevImg

    first 8-bit input image or pyramid constructed by buildOpticalFlowPyramid.

    nextImg

    second input image or pyramid of the same size and the same type as prevImg.

    prevPts

    vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.

    nextPts

    output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.

    status

    output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found, otherwise, it is set to 0.

    err

    output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found, then the error is not defined (use the status parameter to find such cases).

    winSize

    size of the search window at each pyramid level.

    maxLevel

    0-based maximal pyramid level number; if set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on; if pyramids are passed as input, the algorithm will use as many levels as the pyramids have, but no more than maxLevel.

    criteria

    parameter, specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount or when the search window moves by less than criteria.epsilon).

    flags

    operation flags:

    • OPTFLOW_USE_INITIAL_FLOW uses initial estimations, stored in nextPts; if the flag is not set, then prevPts is copied to nextPts and is considered the initial estimate.
    • OPTFLOW_LK_GET_MIN_EIGENVALS use minimum eigenvalues as an error measure (see the minEigThreshold description); if the flag is not set, then the L1 distance between patches around the original and a moved point, divided by the number of pixels in a window, is used as an error measure.

    minEigThreshold

    the algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in CITE: Bouguet00), divided by the number of pixels in a window; if this value is less than minEigThreshold, then the corresponding feature is filtered out and its flow is not processed, which allows bad points to be removed and yields a performance boost.
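
    A hedged end-to-end sketch; prevGray/nextGray are assumed frames, prevPts an assumed CV_32FC2 Mat of feature coordinates (e.g. from goodFeaturesToTrack), and the raw TermCriteria type value 3 means COUNT | EPS:

    let nextPts = Mat()
    let status = Mat()
    let err = Mat()
    Video.calcOpticalFlowPyrLK(prevImg: prevGray, nextImg: nextGray,
                               prevPts: prevPts, nextPts: nextPts,
                               status: status, err: err,
                               winSize: Size2i(width: 21, height: 21),
                               maxLevel: 3,
                               criteria: TermCriteria(type: 3, maxCount: 30, epsilon: 0.01),
                               flags: 0,
                               minEigThreshold: 1e-4)
    // Keep only the points whose status byte is 1 before processing the next frame.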

  • Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See CITE: Bouguet00 . The function is parallelized with the TBB library.

    @note

    • An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/cpp/lkdemo.cpp
    • (Python) An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/python/lk_track.py
    • (Python) An example using the Lucas-Kanade tracker for homography matching can be found at opencv_source_code/samples/python/lk_homography.py

    Declaration

    Objective-C

    + (void)calcOpticalFlowPyrLK:(nonnull Mat *)prevImg
                         nextImg:(nonnull Mat *)nextImg
                         prevPts:(nonnull Mat *)prevPts
                         nextPts:(nonnull Mat *)nextPts
                          status:(nonnull Mat *)status
                             err:(nonnull Mat *)err
                         winSize:(nonnull Size2i *)winSize
                        maxLevel:(int)maxLevel
                        criteria:(nonnull TermCriteria *)criteria
                           flags:(int)flags;

    Swift

    class func calcOpticalFlowPyrLK(prevImg: Mat, nextImg: Mat, prevPts: Mat, nextPts: Mat, status: Mat, err: Mat, winSize: Size2i, maxLevel: Int32, criteria: TermCriteria, flags: Int32)

    Parameters

    prevImg

    first 8-bit input image or pyramid constructed by buildOpticalFlowPyramid.

    nextImg

    second input image or pyramid of the same size and the same type as prevImg.

    prevPts

    vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.

    nextPts

    output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.

    status

    output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found, otherwise, it is set to 0.

    err

    output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found, then the error is not defined (use the status parameter to find such cases).

    winSize

    size of the search window at each pyramid level.

    maxLevel

    0-based maximal pyramid level number; if set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on; if pyramids are passed as input, the algorithm will use as many levels as the pyramids have, but no more than maxLevel.

    criteria

    parameter, specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount or when the search window moves by less than criteria.epsilon).

    flags

    operation flags:

    • OPTFLOW_USE_INITIAL_FLOW uses initial estimations, stored in nextPts; if the flag is not set, then prevPts is copied to nextPts and is considered the initial estimate.
    • OPTFLOW_LK_GET_MIN_EIGENVALS use minimum eigenvalues as an error measure (see the minEigThreshold description); if the flag is not set, then the L1 distance between patches around the original and a moved point, divided by the number of pixels in a window, is used as an error measure.

  • Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See CITE: Bouguet00 . The function is parallelized with the TBB library.

    @note

    • An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/cpp/lkdemo.cpp
    • (Python) An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/python/lk_track.py
    • (Python) An example using the Lucas-Kanade tracker for homography matching can be found at opencv_source_code/samples/python/lk_homography.py

    Declaration

    Objective-C

    + (void)calcOpticalFlowPyrLK:(nonnull Mat *)prevImg
                         nextImg:(nonnull Mat *)nextImg
                         prevPts:(nonnull Mat *)prevPts
                         nextPts:(nonnull Mat *)nextPts
                          status:(nonnull Mat *)status
                             err:(nonnull Mat *)err
                         winSize:(nonnull Size2i *)winSize
                        maxLevel:(int)maxLevel
                        criteria:(nonnull TermCriteria *)criteria;

    Swift

    class func calcOpticalFlowPyrLK(prevImg: Mat, nextImg: Mat, prevPts: Mat, nextPts: Mat, status: Mat, err: Mat, winSize: Size2i, maxLevel: Int32, criteria: TermCriteria)

    Parameters

    prevImg

    first 8-bit input image or pyramid constructed by buildOpticalFlowPyramid.

    nextImg

    second input image or pyramid of the same size and the same type as prevImg.

    prevPts

    vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.

    nextPts

    output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.

    status

    output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found, otherwise, it is set to 0.

    err

    output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found, then the error is not defined (use the status parameter to find such cases).

    winSize

    size of the search window at each pyramid level.

    maxLevel

    0-based maximal pyramid level number; if set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on; if pyramids are passed as input, the algorithm will use as many levels as the pyramids have, but no more than maxLevel.

    criteria

    parameter, specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount or when the search window moves by less than criteria.epsilon).


  • Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See CITE: Bouguet00 . The function is parallelized with the TBB library.

    @note

    • An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/cpp/lkdemo.cpp
    • (Python) An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/python/lk_track.py
    • (Python) An example using the Lucas-Kanade tracker for homography matching can be found at opencv_source_code/samples/python/lk_homography.py

    Declaration

    Objective-C

    + (void)calcOpticalFlowPyrLK:(nonnull Mat *)prevImg
                         nextImg:(nonnull Mat *)nextImg
                         prevPts:(nonnull Mat *)prevPts
                         nextPts:(nonnull Mat *)nextPts
                          status:(nonnull Mat *)status
                             err:(nonnull Mat *)err
                         winSize:(nonnull Size2i *)winSize
                        maxLevel:(int)maxLevel;

    Swift

    class func calcOpticalFlowPyrLK(prevImg: Mat, nextImg: Mat, prevPts: Mat, nextPts: Mat, status: Mat, err: Mat, winSize: Size2i, maxLevel: Int32)

    Parameters

    prevImg

    first 8-bit input image or pyramid constructed by buildOpticalFlowPyramid.

    nextImg

    second input image or pyramid of the same size and the same type as prevImg.

    prevPts

    vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.

    nextPts

    output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.

    status

    output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found, otherwise, it is set to 0.

    err

    output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found, then the error is not defined (use the status parameter to find such cases).

    winSize

    size of the search window at each pyramid level.

    maxLevel

    0-based maximal pyramid level number; if set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on; if pyramids are passed as input, the algorithm will use as many levels as the pyramids have, but no more than maxLevel.


  • Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See CITE: Bouguet00 . The function is parallelized with the TBB library.

    @note

    • An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/cpp/lkdemo.cpp
    • (Python) An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/python/lk_track.py
    • (Python) An example using the Lucas-Kanade tracker for homography matching can be found at opencv_source_code/samples/python/lk_homography.py

    Declaration

    Objective-C

    + (void)calcOpticalFlowPyrLK:(nonnull Mat *)prevImg
                         nextImg:(nonnull Mat *)nextImg
                         prevPts:(nonnull Mat *)prevPts
                         nextPts:(nonnull Mat *)nextPts
                          status:(nonnull Mat *)status
                             err:(nonnull Mat *)err
                         winSize:(nonnull Size2i *)winSize;

    Swift

    class func calcOpticalFlowPyrLK(prevImg: Mat, nextImg: Mat, prevPts: Mat, nextPts: Mat, status: Mat, err: Mat, winSize: Size2i)

    Parameters

    prevImg

    first 8-bit input image or pyramid constructed by buildOpticalFlowPyramid.

    nextImg

    second input image or pyramid of the same size and the same type as prevImg.

    prevPts

    vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.

    nextPts

    output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.

    status

    output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found, otherwise, it is set to 0.

    err

    output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found, then the error is not defined (use the status parameter to find such cases).

    winSize

    size of the search window at each pyramid level.


  • Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See CITE: Bouguet00 . The function is parallelized with the TBB library.

    @note

    • An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/cpp/lkdemo.cpp
    • (Python) An example using the Lucas-Kanade optical flow algorithm can be found at opencv_source_code/samples/python/lk_track.py
    • (Python) An example using the Lucas-Kanade tracker for homography matching can be found at opencv_source_code/samples/python/lk_homography.py

    Declaration

    Objective-C

    + (void)calcOpticalFlowPyrLK:(nonnull Mat *)prevImg
                         nextImg:(nonnull Mat *)nextImg
                         prevPts:(nonnull Mat *)prevPts
                         nextPts:(nonnull Mat *)nextPts
                          status:(nonnull Mat *)status
                             err:(nonnull Mat *)err;

    Swift

    class func calcOpticalFlowPyrLK(prevImg: Mat, nextImg: Mat, prevPts: Mat, nextPts: Mat, status: Mat, err: Mat)

    Parameters

    prevImg

    first 8-bit input image or pyramid constructed by buildOpticalFlowPyramid.

    nextImg

    second input image or pyramid of the same size and the same type as prevImg.

    prevPts

    vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.

    nextPts

    output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.

    status

    output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found, otherwise, it is set to 0.

    err

    output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found, then the error is not defined (use the status parameter to find such cases).
