Ximgproc

Objective-C

@interface Ximgproc : NSObject

Swift

class Ximgproc : NSObject

The Ximgproc module

Member classes: DisparityFilter, DisparityWLSFilter, DTFilter, GuidedFilter, AdaptiveManifoldFilter, FastBilateralSolverFilter, FastGlobalSmootherFilter, SuperpixelSLIC, RFFeatureGetter, StructuredEdgeDetection, SuperpixelLSC, EdgeBoxes, GraphSegmentation, SelectiveSearchSegmentationStrategy, SelectiveSearchSegmentationStrategyColor, SelectiveSearchSegmentationStrategySize, SelectiveSearchSegmentationStrategyTexture, SelectiveSearchSegmentationStrategyFill, SelectiveSearchSegmentationStrategyMultiple, SelectiveSearchSegmentation, ContourFitting, SparseMatchInterpolator, EdgeAwareInterpolator, RICInterpolator, RidgeDetectionFilter, SuperpixelSEEDS, FastLineDetector

Member enums: ThinningTypes, LocalBinarizationMethods, EdgeAwareFiltersList, SLICType, WMFWeightType, AngleRangeOption, HoughOp, HoughDeskewOption

Class Constants

  • Declaration

    Objective-C

    @property (class, readonly) int RO_IGNORE_BORDERS

    Swift

    class var RO_IGNORE_BORDERS: Int32 { get }
  • Declaration

    Objective-C

    @property (class, readonly) int RO_STRICT

    Swift

    class var RO_STRICT: Int32 { get }

Methods

  • Factory method that creates an instance of AdaptiveManifoldFilter and performs some initialization routines.

    For more details about Adaptive Manifold Filter parameters, see the original article CITE: Gastal12 .

    Note

    Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and a [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the corresponding sigmas in the bilateralFilter and dtFilter functions.

    Declaration

    Objective-C

    + (nonnull AdaptiveManifoldFilter *)createAMFilter:(double)sigma_s
                                               sigma_r:(double)sigma_r
                                       adjust_outliers:(BOOL)adjust_outliers;

    Swift

    class func createAMFilter(sigma_s: Double, sigma_r: Double, adjust_outliers: Bool) -> AdaptiveManifoldFilter

    Parameters

    sigma_s

    spatial standard deviation.

    sigma_r

    color space standard deviation; it is similar to the color-space sigma in bilateralFilter.

    adjust_outliers

    optional flag specifying whether to perform the outlier adjustment operation (Eq. 9 in the original paper).
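
    A minimal Swift usage sketch (the image path is illustrative, and the `filter(src:dst:joint:)` call is assumed from the corresponding C++ AdaptiveManifoldFilter API):

```swift
import opencv2

// Smooth an image, guided by itself; note sigma_r must lie in [0; 1] here,
// unlike in bilateralFilter.
let src = Imgcodecs.imread("input.png")     // illustrative path
let dst = Mat()
let amf = Ximgproc.createAMFilter(sigma_s: 16.0, sigma_r: 0.2, adjust_outliers: true)
amf.filter(src: src, dst: dst, joint: src)  // assumed Swift binding of AdaptiveManifoldFilter::filter
```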

  • Factory method that creates an instance of AdaptiveManifoldFilter and performs some initialization routines.

    For more details about Adaptive Manifold Filter parameters, see the original article CITE: Gastal12 .

    Note

    Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and a [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the corresponding sigmas in the bilateralFilter and dtFilter functions.

    Declaration

    Objective-C

    + (nonnull AdaptiveManifoldFilter *)createAMFilter:(double)sigma_s
                                               sigma_r:(double)sigma_r;

    Swift

    class func createAMFilter(sigma_s: Double, sigma_r: Double) -> AdaptiveManifoldFilter

    Parameters

    sigma_s

    spatial standard deviation.

    sigma_r

    color space standard deviation; it is similar to the color-space sigma in bilateralFilter.

  • Creates a ContourFitting algorithm object

    Declaration

    Objective-C

    + (nonnull ContourFitting *)createContourFitting:(int)ctr fd:(int)fd;

    Swift

    class func createContourFitting(ctr: Int32, fd: Int32) -> ContourFitting

    Parameters

    ctr

    number of Fourier descriptors, equal to the number of contour points after resampling.

    fd

    number of Fourier descriptors used for the fitting (see ContourFitting).
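
    A minimal Swift sketch creating the object with values matching the C++ defaults (1024 contour points, 16 descriptors):

```swift
import opencv2

// Create a ContourFitting object; 1024 and 16 mirror the C++ default values.
let fitting = Ximgproc.createContourFitting(ctr: 1024, fd: 16)
```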

  • Creates a ContourFitting algorithm object

    Declaration

    Objective-C

    + (nonnull ContourFitting *)createContourFitting:(int)ctr;

    Swift

    class func createContourFitting(ctr: Int32) -> ContourFitting

    Parameters

    ctr

    number of Fourier descriptors, equal to the number of contour points after resampling.

  • Creates a ContourFitting algorithm object

    Declaration

    Objective-C

    + (nonnull ContourFitting *)createContourFitting;

    Swift

    class func createContourFitting() -> ContourFitting
  • Factory method that creates an instance of DTFilter and performs initialization routines.

    For more details about Domain Transform filter parameters, see the original article CITE: Gastal11 and Domain Transform filter homepage.

    Declaration

    Objective-C

    + (nonnull DTFilter *)createDTFilter:(nonnull Mat *)guide
                            sigmaSpatial:(double)sigmaSpatial
                              sigmaColor:(double)sigmaColor
                                    mode:(EdgeAwareFiltersList)mode
                                numIters:(int)numIters;

    Swift

    class func createDTFilter(guide: Mat, sigmaSpatial: Double, sigmaColor: Double, mode: EdgeAwareFiltersList, numIters: Int32) -> DTFilter

    Parameters

    guide

    guided image (used to build transformed distance, which describes edge structure of guided image).

    sigmaSpatial

    {\sigma}_H
    parameter in the original article; it is similar to the coordinate-space sigma in bilateralFilter.

    sigmaColor

    {\sigma}_r
    parameter in the original article; it is similar to the color-space sigma in bilateralFilter.

    mode

    one of three modes, DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals described in the article.

    numIters

    optional number of iterations used for filtering; 3 is quite enough.
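
    A minimal Swift sketch (the `.DTF_NC` spelling of the EdgeAwareFiltersList case and the `filter(src:dst:)` call are assumed from the C++ DTFilter API):

```swift
import opencv2

// Edge-preserving smoothing with the Domain Transform filter,
// using the input image itself as the guide.
let src = Imgcodecs.imread("input.png")   // illustrative path
let dst = Mat()
let dtf = Ximgproc.createDTFilter(guide: src, sigmaSpatial: 10.0,
                                  sigmaColor: 25.0, mode: .DTF_NC, numIters: 3)
dtf.filter(src: src, dst: dst)            // assumed Swift binding of DTFilter::filter
```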

  • Factory method that creates an instance of DTFilter and performs initialization routines.

    For more details about Domain Transform filter parameters, see the original article CITE: Gastal11 and Domain Transform filter homepage.

    Declaration

    Objective-C

    + (nonnull DTFilter *)createDTFilter:(nonnull Mat *)guide
                            sigmaSpatial:(double)sigmaSpatial
                              sigmaColor:(double)sigmaColor
                                    mode:(EdgeAwareFiltersList)mode;

    Swift

    class func createDTFilter(guide: Mat, sigmaSpatial: Double, sigmaColor: Double, mode: EdgeAwareFiltersList) -> DTFilter

    Parameters

    guide

    guided image (used to build transformed distance, which describes edge structure of guided image).

    sigmaSpatial

    {\sigma}_H
    parameter in the original article; it is similar to the coordinate-space sigma in bilateralFilter.

    sigmaColor

    {\sigma}_r
    parameter in the original article; it is similar to the color-space sigma in bilateralFilter.

    mode

    one of three modes, DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals described in the article.

  • Factory method that creates an instance of DTFilter and performs initialization routines.

    For more details about Domain Transform filter parameters, see the original article CITE: Gastal11 and Domain Transform filter homepage.

    Declaration

    Objective-C

    + (nonnull DTFilter *)createDTFilter:(nonnull Mat *)guide
                            sigmaSpatial:(double)sigmaSpatial
                              sigmaColor:(double)sigmaColor;

    Swift

    class func createDTFilter(guide: Mat, sigmaSpatial: Double, sigmaColor: Double) -> DTFilter

    Parameters

    guide

    guided image (used to build transformed distance, which describes edge structure of guided image).

    sigmaSpatial

    {\sigma}_H
    parameter in the original article; it is similar to the coordinate-space sigma in bilateralFilter.

    sigmaColor

    {\sigma}_r
    parameter in the original article; it is similar to the color-space sigma in bilateralFilter.

  • Convenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM.

    Declaration

    Objective-C

    + (nonnull DisparityWLSFilter *)createDisparityWLSFilter:
        (nonnull StereoMatcher *)matcher_left;

    Swift

    class func createDisparityWLSFilter(matcher_left: StereoMatcher) -> DisparityWLSFilter

    Parameters

    matcher_left

    stereo matcher instance that will be used with the filter
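
    A hedged Swift sketch of the usual WLS post-filtering pipeline; the StereoBM.create, compute, createRightMatcher and filter signatures are assumed from the corresponding C++ API:

```swift
import opencv2

// Left/right disparity maps from StereoBM, then WLS post-filtering.
let leftImg = Imgcodecs.imread("left.png")    // illustrative paths;
let rightImg = Imgcodecs.imread("right.png")  // StereoBM expects grayscale input
let matcherL = StereoBM.create(numDisparities: 64, blockSize: 15)
let wls = Ximgproc.createDisparityWLSFilter(matcher_left: matcherL)
let matcherR = Ximgproc.createRightMatcher(matcher_left: matcherL)
let dispL = Mat(), dispR = Mat(), filtered = Mat()
matcherL.compute(left: leftImg, right: rightImg, disparity: dispL)
matcherR.compute(left: rightImg, right: leftImg, disparity: dispR)
wls.filter(disparity_map_left: dispL, left_view: leftImg,
           filtered_disparity_map: filtered, disparity_map_right: dispR)
```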

  • More generic factory method that creates an instance of DisparityWLSFilter and executes basic initialization routines. When using this method you will need to set up the ROI, matchers and other parameters yourself.

    Declaration

    Objective-C

    + (nonnull DisparityWLSFilter *)createDisparityWLSFilterGeneric:
        (BOOL)use_confidence;

    Swift

    class func createDisparityWLSFilterGeneric(use_confidence: Bool) -> DisparityWLSFilter

    Parameters

    use_confidence

    filtering with confidence requires two disparity maps (for the left and right views) and is approximately two times slower. However, quality is typically significantly better.

  • Factory method that creates an instance of the EdgeAwareInterpolator.

    Declaration

    Objective-C

    + (nonnull EdgeAwareInterpolator *)createEdgeAwareInterpolator;

    Swift

    class func createEdgeAwareInterpolator() -> EdgeAwareInterpolator
  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore
                                  maxBoxes:(int)maxBoxes
                                edgeMinMag:(float)edgeMinMag
                              edgeMergeThr:(float)edgeMergeThr
                             clusterMinMag:(float)clusterMinMag
                            maxAspectRatio:(float)maxAspectRatio
                                minBoxArea:(float)minBoxArea
                                     gamma:(float)gamma
                                     kappa:(float)kappa;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float, maxBoxes: Int32, edgeMinMag: Float, edgeMergeThr: Float, clusterMinMag: Float, maxAspectRatio: Float, minBoxArea: Float, gamma: Float, kappa: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

    maxBoxes

    max number of boxes to detect.

    edgeMinMag

    edge min magnitude. Increase to trade off accuracy for speed.

    edgeMergeThr

    edge merge threshold. Increase to trade off accuracy for speed.

    clusterMinMag

    cluster min magnitude. Increase to trade off accuracy for speed.

    maxAspectRatio

    max aspect ratio of boxes.

    minBoxArea

    minimum area of boxes.

    gamma

    affinity sensitivity.

    kappa

    scale sensitivity.
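
    A hedged Swift sketch; the parameter values shown are illustrative and close to the C++ defaults:

```swift
import opencv2

// Create an EdgeBoxes object. Proposals are then generated from an edge map
// and orientation map (typically produced by StructuredEdgeDetection).
let edgeBoxes = Ximgproc.createEdgeBoxes(alpha: 0.65, beta: 0.75, eta: 1.0,
                                         minScore: 0.01, maxBoxes: 100,
                                         edgeMinMag: 0.1, edgeMergeThr: 0.5,
                                         clusterMinMag: 0.5, maxAspectRatio: 3.0,
                                         minBoxArea: 1000.0, gamma: 2.0, kappa: 1.5)
```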

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore
                                  maxBoxes:(int)maxBoxes
                                edgeMinMag:(float)edgeMinMag
                              edgeMergeThr:(float)edgeMergeThr
                             clusterMinMag:(float)clusterMinMag
                            maxAspectRatio:(float)maxAspectRatio
                                minBoxArea:(float)minBoxArea
                                     gamma:(float)gamma;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float, maxBoxes: Int32, edgeMinMag: Float, edgeMergeThr: Float, clusterMinMag: Float, maxAspectRatio: Float, minBoxArea: Float, gamma: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

    maxBoxes

    max number of boxes to detect.

    edgeMinMag

    edge min magnitude. Increase to trade off accuracy for speed.

    edgeMergeThr

    edge merge threshold. Increase to trade off accuracy for speed.

    clusterMinMag

    cluster min magnitude. Increase to trade off accuracy for speed.

    maxAspectRatio

    max aspect ratio of boxes.

    minBoxArea

    minimum area of boxes.

    gamma

    affinity sensitivity.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore
                                  maxBoxes:(int)maxBoxes
                                edgeMinMag:(float)edgeMinMag
                              edgeMergeThr:(float)edgeMergeThr
                             clusterMinMag:(float)clusterMinMag
                            maxAspectRatio:(float)maxAspectRatio
                                minBoxArea:(float)minBoxArea;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float, maxBoxes: Int32, edgeMinMag: Float, edgeMergeThr: Float, clusterMinMag: Float, maxAspectRatio: Float, minBoxArea: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

    maxBoxes

    max number of boxes to detect.

    edgeMinMag

    edge min magnitude. Increase to trade off accuracy for speed.

    edgeMergeThr

    edge merge threshold. Increase to trade off accuracy for speed.

    clusterMinMag

    cluster min magnitude. Increase to trade off accuracy for speed.

    maxAspectRatio

    max aspect ratio of boxes.

    minBoxArea

    minimum area of boxes.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore
                                  maxBoxes:(int)maxBoxes
                                edgeMinMag:(float)edgeMinMag
                              edgeMergeThr:(float)edgeMergeThr
                             clusterMinMag:(float)clusterMinMag
                            maxAspectRatio:(float)maxAspectRatio;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float, maxBoxes: Int32, edgeMinMag: Float, edgeMergeThr: Float, clusterMinMag: Float, maxAspectRatio: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

    maxBoxes

    max number of boxes to detect.

    edgeMinMag

    edge min magnitude. Increase to trade off accuracy for speed.

    edgeMergeThr

    edge merge threshold. Increase to trade off accuracy for speed.

    clusterMinMag

    cluster min magnitude. Increase to trade off accuracy for speed.

    maxAspectRatio

    max aspect ratio of boxes.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore
                                  maxBoxes:(int)maxBoxes
                                edgeMinMag:(float)edgeMinMag
                              edgeMergeThr:(float)edgeMergeThr
                             clusterMinMag:(float)clusterMinMag;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float, maxBoxes: Int32, edgeMinMag: Float, edgeMergeThr: Float, clusterMinMag: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

    maxBoxes

    max number of boxes to detect.

    edgeMinMag

    edge min magnitude. Increase to trade off accuracy for speed.

    edgeMergeThr

    edge merge threshold. Increase to trade off accuracy for speed.

    clusterMinMag

    cluster min magnitude. Increase to trade off accuracy for speed.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore
                                  maxBoxes:(int)maxBoxes
                                edgeMinMag:(float)edgeMinMag
                              edgeMergeThr:(float)edgeMergeThr;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float, maxBoxes: Int32, edgeMinMag: Float, edgeMergeThr: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

    maxBoxes

    max number of boxes to detect.

    edgeMinMag

    edge min magnitude. Increase to trade off accuracy for speed.

    edgeMergeThr

    edge merge threshold. Increase to trade off accuracy for speed.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore
                                  maxBoxes:(int)maxBoxes
                                edgeMinMag:(float)edgeMinMag;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float, maxBoxes: Int32, edgeMinMag: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

    maxBoxes

    max number of boxes to detect.

    edgeMinMag

    edge min magnitude. Increase to trade off accuracy for speed.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore
                                  maxBoxes:(int)maxBoxes;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float, maxBoxes: Int32) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

    maxBoxes

    max number of boxes to detect.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta
                                  minScore:(float)minScore;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float, minScore: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

    minScore

    min score of boxes to detect.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha
                                      beta:(float)beta
                                       eta:(float)eta;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float, eta: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

    eta

    adaptation rate for nms threshold.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha beta:(float)beta;

    Swift

    class func createEdgeBoxes(alpha: Float, beta: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

    beta

    nms threshold for object proposals.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes:(float)alpha;

    Swift

    class func createEdgeBoxes(alpha: Float) -> EdgeBoxes

    Parameters

    alpha

    step size of sliding window search.

  • Creates an EdgeBoxes object

    Declaration

    Objective-C

    + (nonnull EdgeBoxes *)createEdgeBoxes;

    Swift

    class func createEdgeBoxes() -> EdgeBoxes
  • Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Declaration

    Objective-C

    + (nonnull FastBilateralSolverFilter *)
        createFastBilateralSolverFilter:(nonnull Mat *)guide
                          sigma_spatial:(double)sigma_spatial
                             sigma_luma:(double)sigma_luma
                           sigma_chroma:(double)sigma_chroma
                                 lambda:(double)lambda
                               num_iter:(int)num_iter
                                max_tol:(double)max_tol;

    Swift

    class func createFastBilateralSolverFilter(guide: Mat, sigma_spatial: Double, sigma_luma: Double, sigma_chroma: Double, lambda: Double, num_iter: Int32, max_tol: Double) -> FastBilateralSolverFilter

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

    sigma_chroma

    parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

    lambda

    smoothness strength parameter for the solver.

    num_iter

    number of iterations used by the solver; 25 is usually enough.

    max_tol

    convergence tolerance used by the solver.
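
    A hedged Swift sketch; the confidence-map construction and the `filter(src:confidence:dst:)` call are assumed from the C++ FastBilateralSolverFilter API:

```swift
import opencv2

// Smooth a target image using the Fast Bilateral Solver with an 8-bit guide.
let guide = Imgcodecs.imread("guide.png")   // illustrative; 8-bit, 1 or 3 channels
let src = Imgcodecs.imread("target.png")
let dst = Mat()
// Full-confidence map (all ones); construction details are illustrative.
let confidence = Mat(rows: src.rows(), cols: src.cols(),
                     type: CvType.CV_32F, scalar: Scalar(1.0))
let fbs = Ximgproc.createFastBilateralSolverFilter(
    guide: guide, sigma_spatial: 8.0, sigma_luma: 8.0,
    sigma_chroma: 8.0, lambda: 128.0, num_iter: 25, max_tol: 1e-5)
fbs.filter(src: src, confidence: confidence, dst: dst)
```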

  • Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Declaration

    Objective-C

    + (nonnull FastBilateralSolverFilter *)
        createFastBilateralSolverFilter:(nonnull Mat *)guide
                          sigma_spatial:(double)sigma_spatial
                             sigma_luma:(double)sigma_luma
                           sigma_chroma:(double)sigma_chroma
                                 lambda:(double)lambda
                               num_iter:(int)num_iter;

    Swift

    class func createFastBilateralSolverFilter(guide: Mat, sigma_spatial: Double, sigma_luma: Double, sigma_chroma: Double, lambda: Double, num_iter: Int32) -> FastBilateralSolverFilter

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

    sigma_chroma

    parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

    lambda

    smoothness strength parameter for the solver.

    num_iter

    number of iterations used by the solver; 25 is usually enough.

  • Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Declaration

    Objective-C

    + (nonnull FastBilateralSolverFilter *)
        createFastBilateralSolverFilter:(nonnull Mat *)guide
                          sigma_spatial:(double)sigma_spatial
                             sigma_luma:(double)sigma_luma
                           sigma_chroma:(double)sigma_chroma
                                 lambda:(double)lambda;

    Swift

    class func createFastBilateralSolverFilter(guide: Mat, sigma_spatial: Double, sigma_luma: Double, sigma_chroma: Double, lambda: Double) -> FastBilateralSolverFilter

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

    sigma_chroma

    parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

    lambda

    smoothness strength parameter for the solver.

  • Factory method that creates an instance of FastBilateralSolverFilter and executes the initialization routines.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Declaration

    Objective-C

    + (nonnull FastBilateralSolverFilter *)
        createFastBilateralSolverFilter:(nonnull Mat *)guide
                          sigma_spatial:(double)sigma_spatial
                             sigma_luma:(double)sigma_luma
                           sigma_chroma:(double)sigma_chroma;

    Swift

    class func createFastBilateralSolverFilter(guide: Mat, sigma_spatial: Double, sigma_luma: Double, sigma_chroma: Double) -> FastBilateralSolverFilter

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

    sigma_chroma

    parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

  • Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.

    For more details about Fast Global Smoother parameters, see the original paper CITE: Min2014. Note, however, several differences: lambda attenuation described in the paper is implemented slightly differently, so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, for image filtering where the source and guide image are the same, the authors propose dynamically updating the guide image after each iteration; to maximize performance, this feature was not implemented here.

    Declaration

    Objective-C

    + (nonnull FastGlobalSmootherFilter *)
        createFastGlobalSmootherFilter:(nonnull Mat *)guide
                                lambda:(double)lambda
                           sigma_color:(double)sigma_color
                    lambda_attenuation:(double)lambda_attenuation
                              num_iter:(int)num_iter;

    Swift

    class func createFastGlobalSmootherFilter(guide: Mat, lambda: Double, sigma_color: Double, lambda_attenuation: Double, num_iter: Int32) -> FastGlobalSmootherFilter

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    lambda

    parameter defining the amount of regularization.

    sigma_color

    parameter similar to the color-space sigma in bilateralFilter.

    lambda_attenuation

    internal parameter defining how much lambda decreases after each iteration. Normally it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

    num_iter

    number of iterations used for filtering, 3 is usually enough.
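
    A hedged Swift sketch (the `filter(src:dst:)` call is assumed from the C++ FastGlobalSmootherFilter API); remember that sigma_color here is on a 0-255 scale:

```swift
import opencv2

// Global edge-preserving smoothing, guided by the input image itself.
let src = Imgcodecs.imread("input.png")   // illustrative path
let dst = Mat()
let fgs = Ximgproc.createFastGlobalSmootherFilter(
    guide: src, lambda: 100.0, sigma_color: 8.0,
    lambda_attenuation: 0.25, num_iter: 3)
fgs.filter(src: src, dst: dst)
```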

  • Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.

    For more details about Fast Global Smoother parameters, see the original paper CITE: Min2014. Note, however, several differences: lambda attenuation described in the paper is implemented slightly differently, so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, for image filtering where the source and guide image are the same, the authors propose dynamically updating the guide image after each iteration; to maximize performance, this feature was not implemented here.

    Declaration

    Objective-C

    + (nonnull FastGlobalSmootherFilter *)
        createFastGlobalSmootherFilter:(nonnull Mat *)guide
                                lambda:(double)lambda
                           sigma_color:(double)sigma_color
                    lambda_attenuation:(double)lambda_attenuation;

    Swift

    class func createFastGlobalSmootherFilter(guide: Mat, lambda: Double, sigma_color: Double, lambda_attenuation: Double) -> FastGlobalSmootherFilter

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    lambda

    parameter defining the amount of regularization.

    sigma_color

    parameter similar to the color-space sigma in bilateralFilter.

    lambda_attenuation

    internal parameter defining how much lambda decreases after each iteration. Normally it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

  • Factory method that creates an instance of FastGlobalSmootherFilter and executes the initialization routines.

    For more details about Fast Global Smoother parameters, see the original paper CITE: Min2014. Note, however, several differences: lambda attenuation described in the paper is implemented slightly differently, so do not expect the results to be identical to those from the paper; sigma_color values from the paper should be multiplied by 255.0 to achieve the same effect. Also, for image filtering where the source and guide image are the same, the authors propose dynamically updating the guide image after each iteration; to maximize performance, this feature was not implemented here.

    Declaration

    Objective-C

    + (nonnull FastGlobalSmootherFilter *)
        createFastGlobalSmootherFilter:(nonnull Mat *)guide
                                lambda:(double)lambda
                           sigma_color:(double)sigma_color;

    Swift

    class func createFastGlobalSmootherFilter(guide: Mat, lambda: Double, sigma_color: Double) -> FastGlobalSmootherFilter

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    lambda

    parameter defining the amount of regularization.

    sigma_color

    parameter similar to the color-space sigma in bilateralFilter.

  • Creates a smart pointer to a FastLineDetector object and initializes it

    Declaration

    Objective-C

    + (nonnull FastLineDetector *)createFastLineDetector:(int)_length_threshold
                                     _distance_threshold:(float)_distance_threshold
                                              _canny_th1:(double)_canny_th1
                                              _canny_th2:(double)_canny_th2
                                    _canny_aperture_size:(int)_canny_aperture_size
                                               _do_merge:(BOOL)_do_merge;

    Swift

    class func createFastLineDetector(_length_threshold: Int32, _distance_threshold: Float, _canny_th1: Double, _canny_th2: Double, _canny_aperture_size: Int32, _do_merge: Bool) -> FastLineDetector

    Parameters

    _length_threshold

    10 - Segments shorter than this will be discarded

    _distance_threshold

    1.41421356 - A point farther from a hypothesis line segment than this will be regarded as an outlier

    _canny_th1

    50 - First threshold for hysteresis procedure in Canny()

    _canny_th2

    50 - Second threshold for hysteresis procedure in Canny()

    _canny_aperture_size

    3 - Aperture size for the Sobel operator in Canny()

    _do_merge

    false - If true, incremental merging of segments will be performed
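
    For a quick runnable illustration, the same factory exists in OpenCV's Python bindings (opencv-contrib-python) with the same argument order as this Swift overload; the test image below is synthetic:

    ```python
    import cv2
    import numpy as np

    # Synthetic test image containing one white line on a black background.
    img = np.zeros((100, 100), dtype=np.uint8)
    cv2.line(img, (10, 10), (90, 90), 255, 2)

    # Arguments mirror the overload above: length_threshold,
    # distance_threshold, canny_th1, canny_th2, canny_aperture_size, do_merge.
    fld = cv2.ximgproc.createFastLineDetector(10, 1.41421356, 50.0, 50.0, 3, False)
    lines = fld.detect(img)  # N x 1 x 4 array of (x1, y1, x2, y2) segments
    ```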

  • Creates a smart pointer to a FastLineDetector object and initializes it

    Declaration

    Objective-C

    + (nonnull FastLineDetector *)createFastLineDetector:(int)_length_threshold
                                     _distance_threshold:(float)_distance_threshold
                                              _canny_th1:(double)_canny_th1
                                              _canny_th2:(double)_canny_th2
                                    _canny_aperture_size:(int)_canny_aperture_size;

    Swift

    class func createFastLineDetector(_length_threshold: Int32, _distance_threshold: Float, _canny_th1: Double, _canny_th2: Double, _canny_aperture_size: Int32) -> FastLineDetector

    Parameters

    _length_threshold

    10 - Segments shorter than this will be discarded

    _distance_threshold

    1.41421356 - A point farther from a hypothesis line segment than this will be regarded as an outlier

    _canny_th1

    50 - First threshold for hysteresis procedure in Canny()

    _canny_th2

    50 - Second threshold for hysteresis procedure in Canny()

    _canny_aperture_size

    3 - Aperture size for the Sobel operator in Canny()

  • Creates a smart pointer to a FastLineDetector object and initializes it

    Declaration

    Objective-C

    + (nonnull FastLineDetector *)createFastLineDetector:(int)_length_threshold
                                     _distance_threshold:(float)_distance_threshold
                                              _canny_th1:(double)_canny_th1
                                              _canny_th2:(double)_canny_th2;

    Swift

    class func createFastLineDetector(_length_threshold: Int32, _distance_threshold: Float, _canny_th1: Double, _canny_th2: Double) -> FastLineDetector

    Parameters

    _length_threshold

    10 - Segments shorter than this will be discarded

    _distance_threshold

    1.41421356 - A point farther from a hypothesis line segment than this will be regarded as an outlier

    _canny_th1

    50 - First threshold for hysteresis procedure in Canny()

    _canny_th2

    50 - Second threshold for hysteresis procedure in Canny()

  • Creates a smart pointer to a FastLineDetector object and initializes it

    Declaration

    Objective-C

    + (nonnull FastLineDetector *)createFastLineDetector:(int)_length_threshold
                                     _distance_threshold:(float)_distance_threshold
                                              _canny_th1:(double)_canny_th1;

    Swift

    class func createFastLineDetector(_length_threshold: Int32, _distance_threshold: Float, _canny_th1: Double) -> FastLineDetector

    Parameters

    _length_threshold

    10 - Segments shorter than this will be discarded

    _distance_threshold

    1.41421356 - A point farther from a hypothesis line segment than this will be regarded as an outlier

    _canny_th1

    50 - First threshold for hysteresis procedure in Canny()

  • Creates a smart pointer to a FastLineDetector object and initializes it

    Declaration

    Objective-C

    + (nonnull FastLineDetector *)createFastLineDetector:(int)_length_threshold
                                     _distance_threshold:(float)_distance_threshold;

    Swift

    class func createFastLineDetector(_length_threshold: Int32, _distance_threshold: Float) -> FastLineDetector

    Parameters

    _length_threshold

    10 - Segments shorter than this will be discarded

    _distance_threshold

    1.41421356 - A point farther from a hypothesis line segment than this will be regarded as an outlier

  • Creates a smart pointer to a FastLineDetector object and initializes it

    Declaration

    Objective-C

    + (nonnull FastLineDetector *)createFastLineDetector:(int)_length_threshold;

    Swift

    class func createFastLineDetector(_length_threshold: Int32) -> FastLineDetector

    Parameters

    _length_threshold

    10 - Segments shorter than this will be discarded

  • Creates a smart pointer to a FastLineDetector object and initializes it

    Declaration

    Objective-C

    + (nonnull FastLineDetector *)createFastLineDetector;

    Swift

    class func createFastLineDetector() -> FastLineDetector
  • Creates a graph based segmentor

    Declaration

    Objective-C

    + (nonnull GraphSegmentation *)createGraphSegmentation:(double)sigma
                                                         k:(float)k
                                                  min_size:(int)min_size;

    Swift

    class func createGraphSegmentation(sigma: Double, k: Float, min_size: Int32) -> GraphSegmentation

    Parameters

    sigma

    The sigma parameter, used to smooth image

    k

    The k parameter of the algorithm

    min_size

    The minimum size of segments
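
    As a hedged illustration, the equivalent Python factory (opencv-contrib-python) lives in the cv2.ximgproc.segmentation submodule; values below are illustrative only:

    ```python
    import cv2
    import numpy as np

    img = np.random.randint(0, 256, (48, 48, 3), dtype=np.uint8)

    # sigma smooths the image before building the graph, k controls the
    # scale of observation, and components smaller than min_size are merged.
    gs = cv2.ximgproc.segmentation.createGraphSegmentation(0.5, 300.0, 100)
    labels = gs.processImage(img)  # per-pixel segment labels
    ```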

  • Creates a graph based segmentor

    Declaration

    Objective-C

    + (nonnull GraphSegmentation *)createGraphSegmentation:(double)sigma k:(float)k;

    Swift

    class func createGraphSegmentation(sigma: Double, k: Float) -> GraphSegmentation

    Parameters

    sigma

    The sigma parameter, used to smooth image

    k

    The k parameter of the algorithm

  • Creates a graph based segmentor

    Declaration

    Objective-C

    + (nonnull GraphSegmentation *)createGraphSegmentation:(double)sigma;

    Swift

    class func createGraphSegmentation(sigma: Double) -> GraphSegmentation

    Parameters

    sigma

    The sigma parameter, used to smooth image

  • Creates a graph based segmentor

    Declaration

    Objective-C

    + (nonnull GraphSegmentation *)createGraphSegmentation;

    Swift

    class func createGraphSegmentation() -> GraphSegmentation
  • Factory method, create instance of GuidedFilter and produce initialization routines.

    For more details about Guided Filter parameters, see the original article CITE: Kaiming10 .

    Declaration

    Objective-C

    + (nonnull GuidedFilter *)createGuidedFilter:(nonnull Mat *)guide
                                          radius:(int)radius
                                             eps:(double)eps;

    Swift

    class func createGuidedFilter(guide: Mat, radius: Int32, eps: Double) -> GuidedFilter

    Parameters

    guide

    guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 will be used.

    radius

    radius of Guided Filter.

    eps

    regularization term of Guided Filter.

    {eps}^2
    is similar to the color-space sigma in bilateralFilter.
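
    A minimal runnable sketch using the same factory in OpenCV's Python bindings (opencv-contrib-python); the radius and eps values are illustrative:

    ```python
    import cv2
    import numpy as np

    guide = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    src = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)

    # eps acts like a squared color sigma: for 8-bit images a value of
    # (0.1 * 255) ** 2 corresponds to eps = 0.1 on a [0, 1] intensity scale.
    gf = cv2.ximgproc.createGuidedFilter(guide, 8, (0.1 * 255) ** 2)
    out = gf.filter(src)
    ```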

  • Declaration

    Objective-C

    + (RFFeatureGetter*)createRFFeatureGetter NS_SWIFT_NAME(createRFFeatureGetter());

    Swift

    class func createRFFeatureGetter() -> RFFeatureGetter
  • Factory method that creates an instance of the RICInterpolator.

    Declaration

    Objective-C

    + (nonnull RICInterpolator *)createRICInterpolator;

    Swift

    class func createRICInterpolator() -> RICInterpolator
  • Create a new SelectiveSearchSegmentation class.

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentation *)createSelectiveSearchSegmentation;

    Swift

    class func createSelectiveSearchSegmentation() -> SelectiveSearchSegmentation
  • Create a new color-based strategy

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategyColor *)
        createSelectiveSearchSegmentationStrategyColor;

    Swift

    class func createSelectiveSearchSegmentationStrategyColor() -> SelectiveSearchSegmentationStrategyColor
  • Create a new fill-based strategy

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategyFill *)
        createSelectiveSearchSegmentationStrategyFill;

    Swift

    class func createSelectiveSearchSegmentationStrategyFill() -> SelectiveSearchSegmentationStrategyFill
  • Create a new multiple strategy and set four substrategies, with equal weights

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategyMultiple *)
        createSelectiveSearchSegmentationStrategyMultiple:
            (nonnull SelectiveSearchSegmentationStrategy *)s1
                                                       s2:(nonnull
                                                               SelectiveSearchSegmentationStrategy
                                                                   *)s2
                                                       s3:(nonnull
                                                               SelectiveSearchSegmentationStrategy
                                                                   *)s3
                                                       s4:(nonnull
                                                               SelectiveSearchSegmentationStrategy
                                                                   *)s4;

    Swift

    class func createSelectiveSearchSegmentationStrategyMultiple(s1: SelectiveSearchSegmentationStrategy, s2: SelectiveSearchSegmentationStrategy, s3: SelectiveSearchSegmentationStrategy, s4: SelectiveSearchSegmentationStrategy) -> SelectiveSearchSegmentationStrategyMultiple

    Parameters

    s1

    The first strategy

    s2

    The second strategy

    s3

    The third strategy

    s4

    The fourth strategy

  • Create a new multiple strategy and set three substrategies, with equal weights

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategyMultiple *)
        createSelectiveSearchSegmentationStrategyMultiple:
            (nonnull SelectiveSearchSegmentationStrategy *)s1
                                                       s2:(nonnull
                                                               SelectiveSearchSegmentationStrategy
                                                                   *)s2
                                                       s3:(nonnull
                                                               SelectiveSearchSegmentationStrategy
                                                                   *)s3;

    Swift

    class func createSelectiveSearchSegmentationStrategyMultiple(s1: SelectiveSearchSegmentationStrategy, s2: SelectiveSearchSegmentationStrategy, s3: SelectiveSearchSegmentationStrategy) -> SelectiveSearchSegmentationStrategyMultiple

    Parameters

    s1

    The first strategy

    s2

    The second strategy

    s3

    The third strategy

  • Create a new multiple strategy and set two substrategies, with equal weights

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategyMultiple *)
        createSelectiveSearchSegmentationStrategyMultiple:
            (nonnull SelectiveSearchSegmentationStrategy *)s1
                                                       s2:(nonnull
                                                               SelectiveSearchSegmentationStrategy
                                                                   *)s2;

    Swift

    class func createSelectiveSearchSegmentationStrategyMultiple(s1: SelectiveSearchSegmentationStrategy, s2: SelectiveSearchSegmentationStrategy) -> SelectiveSearchSegmentationStrategyMultiple

    Parameters

    s1

    The first strategy

    s2

    The second strategy

  • Create a new multiple strategy and set one substrategy

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategyMultiple *)
        createSelectiveSearchSegmentationStrategyMultiple:
            (nonnull SelectiveSearchSegmentationStrategy *)s1;

    Swift

    class func createSelectiveSearchSegmentationStrategyMultiple(s1: SelectiveSearchSegmentationStrategy) -> SelectiveSearchSegmentationStrategyMultiple

    Parameters

    s1

    The first strategy

  • Create a new multiple strategy

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategyMultiple *)
        createSelectiveSearchSegmentationStrategyMultiple;

    Swift

    class func createSelectiveSearchSegmentationStrategyMultiple() -> SelectiveSearchSegmentationStrategyMultiple
  • Create a new size-based strategy

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategySize *)
        createSelectiveSearchSegmentationStrategySize;

    Swift

    class func createSelectiveSearchSegmentationStrategySize() -> SelectiveSearchSegmentationStrategySize
  • Create a new texture-based strategy

    Declaration

    Objective-C

    + (nonnull SelectiveSearchSegmentationStrategyTexture *)
        createSelectiveSearchSegmentationStrategyTexture;

    Swift

    class func createSelectiveSearchSegmentationStrategyTexture() -> SelectiveSearchSegmentationStrategyTexture
  • Convenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence.

    Declaration

    Objective-C

    + (nonnull StereoMatcher *)createRightMatcher:
        (nonnull StereoMatcher *)matcher_left;

    Swift

    class func createRightMatcher(matcher_left: StereoMatcher) -> StereoMatcher

    Parameters

    matcher_left

    main stereo matcher instance that will be used with the filter
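
    As a hedged sketch in OpenCV's Python bindings (opencv-contrib-python): the left matcher here is a plain StereoBM instance, and the image pair is synthetic, chosen only to make the example self-contained:

    ```python
    import cv2
    import numpy as np

    left = np.random.randint(0, 256, (96, 96), dtype=np.uint8)
    right = np.roll(left, 4, axis=1)  # crude horizontal shift as a stand-in pair

    left_matcher = cv2.StereoBM_create(numDisparities=16, blockSize=15)
    right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

    disp_left = left_matcher.compute(left, right)
    disp_right = right_matcher.compute(right, left)  # note the swapped order
    ```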

  • Declaration

    Objective-C

    + (StructuredEdgeDetection*)createStructuredEdgeDetection:(NSString*)model howToGetFeatures:(RFFeatureGetter*)howToGetFeatures NS_SWIFT_NAME(createStructuredEdgeDetection(model:howToGetFeatures:));

    Swift

    class func createStructuredEdgeDetection(model: String, howToGetFeatures: RFFeatureGetter) -> StructuredEdgeDetection
  • Declaration

    Objective-C

    + (StructuredEdgeDetection*)createStructuredEdgeDetection:(NSString*)model NS_SWIFT_NAME(createStructuredEdgeDetection(model:));

    Swift

    class func createStructuredEdgeDetection(model: String) -> StructuredEdgeDetection
  • Class implementing the LSC (Linear Spectral Clustering) superpixels

    The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELab color space.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelLSC *)createSuperpixelLSC:(nonnull Mat *)image
                                       region_size:(int)region_size
                                             ratio:(float)ratio;

    Swift

    class func createSuperpixelLSC(image: Mat, region_size: Int32, ratio: Float) -> SuperpixelLSC

    Parameters

    image

    Image to segment

    region_size

    Chooses an average superpixel size measured in pixels

    ratio

    Chooses the enforcement of the superpixel compactness factor
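
    A runnable sketch using the same factory in OpenCV's Python bindings (opencv-contrib-python), including the preprocessing recommended above; the region_size and ratio values are illustrative:

    ```python
    import cv2
    import numpy as np

    img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)

    # Recommended preprocessing: small Gaussian blur plus CIELab conversion.
    lab = cv2.cvtColor(cv2.GaussianBlur(img, (3, 3), 0), cv2.COLOR_BGR2Lab)

    lsc = cv2.ximgproc.createSuperpixelLSC(lab, region_size=16, ratio=0.075)
    lsc.iterate(10)                   # run the computing iterations
    labels = lsc.getLabels()          # per-pixel superpixel labels
    count = lsc.getNumberOfSuperpixels()
    ```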

  • Class implementing the LSC (Linear Spectral Clustering) superpixels

    The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELab color space.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelLSC *)createSuperpixelLSC:(nonnull Mat *)image
                                       region_size:(int)region_size;

    Swift

    class func createSuperpixelLSC(image: Mat, region_size: Int32) -> SuperpixelLSC

    Parameters

    image

    Image to segment

    region_size

    Chooses an average superpixel size measured in pixels

  • Class implementing the LSC (Linear Spectral Clustering) superpixels

    The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ratio. It preallocates some buffers for future computing iterations over the given image. An example of LSC is illustrated in the following picture. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELab color space.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelLSC *)createSuperpixelLSC:(nonnull Mat *)image;

    Swift

    class func createSuperpixelLSC(image: Mat) -> SuperpixelLSC

    Parameters

    image

    Image to segment

  • Initializes a SuperpixelSEEDS object.

    The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.

    The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid, in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelSEEDS *)createSuperpixelSEEDS:(int)image_width
                                          image_height:(int)image_height
                                        image_channels:(int)image_channels
                                       num_superpixels:(int)num_superpixels
                                            num_levels:(int)num_levels
                                                 prior:(int)prior
                                        histogram_bins:(int)histogram_bins
                                           double_step:(BOOL)double_step;

    Swift

    class func createSuperpixelSEEDS(image_width: Int32, image_height: Int32, image_channels: Int32, num_superpixels: Int32, num_levels: Int32, prior: Int32, histogram_bins: Int32, double_step: Bool) -> SuperpixelSEEDS

    Parameters

    image_width

    Image width.

    image_height

    Image height.

    image_channels

    Number of channels of the image.

    num_superpixels

    Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.

    num_levels

    Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.

    prior

    enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5].

    histogram_bins

    Number of histogram bins.

    double_step

    If true, iterate each block level twice for higher accuracy.
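
    A hedged, runnable sketch of the same factory in OpenCV's Python bindings (opencv-contrib-python); the counts and levels below are illustrative rather than recommended defaults:

    ```python
    import cv2
    import numpy as np

    h, w, c = 128, 128, 3
    img = np.random.randint(0, 256, (h, w, c), dtype=np.uint8)

    # Width, height and channels are passed separately and must match the
    # image later given to iterate().
    seeds = cv2.ximgproc.createSuperpixelSEEDS(w, h, c, 64, 2, 2, 5, False)
    seeds.iterate(img, 4)
    count = seeds.getNumberOfSuperpixels()  # may be smaller than requested
    ```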

  • Initializes a SuperpixelSEEDS object.

    The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.

    The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid, in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelSEEDS *)createSuperpixelSEEDS:(int)image_width
                                          image_height:(int)image_height
                                        image_channels:(int)image_channels
                                       num_superpixels:(int)num_superpixels
                                            num_levels:(int)num_levels
                                                 prior:(int)prior
                                        histogram_bins:(int)histogram_bins;

    Swift

    class func createSuperpixelSEEDS(image_width: Int32, image_height: Int32, image_channels: Int32, num_superpixels: Int32, num_levels: Int32, prior: Int32, histogram_bins: Int32) -> SuperpixelSEEDS

    Parameters

    image_width

    Image width.

    image_height

    Image height.

    image_channels

    Number of channels of the image.

    num_superpixels

    Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.

    num_levels

    Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.

    prior

    enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5].

    histogram_bins

    Number of histogram bins.

  • Initializes a SuperpixelSEEDS object.

    The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.

    The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid, in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelSEEDS *)createSuperpixelSEEDS:(int)image_width
                                          image_height:(int)image_height
                                        image_channels:(int)image_channels
                                       num_superpixels:(int)num_superpixels
                                            num_levels:(int)num_levels
                                                 prior:(int)prior;

    Swift

    class func createSuperpixelSEEDS(image_width: Int32, image_height: Int32, image_channels: Int32, num_superpixels: Int32, num_levels: Int32, prior: Int32) -> SuperpixelSEEDS

    Parameters

    image_width

    Image width.

    image_height

    Image height.

    image_channels

    Number of channels of the image.

    num_superpixels

    Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.

    num_levels

    Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.

    prior

    enable 3x3 shape smoothing term if >0. A larger value leads to smoother shapes. prior must be in the range [0, 5].

  • Initializes a SuperpixelSEEDS object.

    The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.

    The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid, in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level. An example of initialization of 4 block levels is illustrated in the following figure.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelSEEDS *)createSuperpixelSEEDS:(int)image_width
                                          image_height:(int)image_height
                                        image_channels:(int)image_channels
                                       num_superpixels:(int)num_superpixels
                                            num_levels:(int)num_levels;

    Swift

    class func createSuperpixelSEEDS(image_width: Int32, image_height: Int32, image_channels: Int32, num_superpixels: Int32, num_levels: Int32) -> SuperpixelSEEDS

    Parameters

    image_width

    Image width.

    image_height

    Image height.

    image_channels

    Number of channels of the image.

    num_superpixels

    Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.

    num_levels

    Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.

  • Initialize a SuperpixelSLIC object

    The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelSLIC *)createSuperpixelSLIC:(nonnull Mat *)image
                                           algorithm:(SLICType)algorithm
                                         region_size:(int)region_size
                                               ruler:(float)ruler;

    Swift

    class func createSuperpixelSLIC(image: Mat, algorithm: SLICType, region_size: Int32, ruler: Float) -> SuperpixelSLIC

    Parameters

    image

    Image to segment

    algorithm

    Chooses the algorithm variant to use: SLIC segments image using a desired region_size, and in addition SLICO will optimize using adaptive compactness factor, while MSLIC will optimize using manifold methods resulting in more content-sensitive superpixels.

    region_size

    Chooses an average superpixel size measured in pixels

    ruler

    Chooses the enforcement of the superpixel smoothness factor
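
    For a runnable sketch, the same factory is available in OpenCV's Python bindings (opencv-contrib-python); the SLICO variant and parameter values below are illustrative choices:

    ```python
    import cv2
    import numpy as np

    img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
    # Recommended preprocessing: small Gaussian blur plus CIELab conversion.
    lab = cv2.cvtColor(cv2.GaussianBlur(img, (3, 3), 0), cv2.COLOR_BGR2Lab)

    # SLICO adapts the compactness factor per superpixel; cv2.ximgproc.SLIC
    # and cv2.ximgproc.MSLIC select the other two variants.
    slic = cv2.ximgproc.createSuperpixelSLIC(lab, cv2.ximgproc.SLICO, 16, 10.0)
    slic.iterate(10)
    labels = slic.getLabels()
    ```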

  • Initialize a SuperpixelSLIC object

    The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelSLIC *)createSuperpixelSLIC:(nonnull Mat *)image
                                           algorithm:(SLICType)algorithm
                                         region_size:(int)region_size;

    Swift

    class func createSuperpixelSLIC(image: Mat, algorithm: SLICType, region_size: Int32) -> SuperpixelSLIC

    Parameters

    image

    Image to segment

    algorithm

    Chooses the algorithm variant to use: SLIC segments image using a desired region_size, and in addition SLICO will optimize using adaptive compactness factor, while MSLIC will optimize using manifold methods resulting in more content-sensitive superpixels.

    region_size

    Chooses an average superpixel size measured in pixels

  • Initialize a SuperpixelSLIC object

    The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelSLIC *)createSuperpixelSLIC:(nonnull Mat *)image
                                           algorithm:(SLICType)algorithm;

    Swift

    class func createSuperpixelSLIC(image: Mat, algorithm: SLICType) -> SuperpixelSLIC

    Parameters

    image

    Image to segment

    algorithm

    Chooses the algorithm variant to use: SLIC segments image using a desired region_size, and in addition SLICO will optimize using adaptive compactness factor, while MSLIC will optimize using manifold methods resulting in more content-sensitive superpixels.

  • Initialize a SuperpixelSLIC object

    The function initializes a SuperpixelSLIC object for the input image. It sets the parameters of the chosen superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results it is recommended for color images to preprocess the image with a little Gaussian blur using a small 3 x 3 kernel and an additional conversion into CIELab color space. An example of SLIC versus SLICO and MSLIC is illustrated in the following picture.

    image

    Declaration

    Objective-C

    + (nonnull SuperpixelSLIC *)createSuperpixelSLIC:(nonnull Mat *)image;

    Swift

    class func createSuperpixelSLIC(image: Mat) -> SuperpixelSLIC

    Parameters

    image

    Image to segment

  • Calculates the coordinates of the line segment corresponding to a point in Hough space. @retval [Vec4i] Coordinates of the line segment corresponding to the point in Hough space. @remarks If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. @remarks If the rules parameter is set to RO_WEAK then, for a point belonging to the incorrect part of the Hough image, the returned line will not intersect the source image.

    The function calculates coordinates of line segment corresponded by point in Hough space.

    Declaration

    Objective-C

    + (nonnull Int4 *)HoughPoint2Line:(nonnull Point2i *)houghPoint
                           srcImgInfo:(nonnull Mat *)srcImgInfo
                           angleRange:(AngleRangeOption)angleRange
                             makeSkew:(HoughDeskewOption)makeSkew
                                rules:(int)rules;

    Swift

    class func HoughPoint2Line(houghPoint: Point2i, srcImgInfo: Mat, angleRange: AngleRangeOption, makeSkew: HoughDeskewOption, rules: Int32) -> Int4
  • Calculates the coordinates of the line segment corresponding to a point in Hough space. @retval [Vec4i] Coordinates of the line segment corresponding to the point in Hough space. @remarks If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. @remarks If the rules parameter is set to RO_WEAK, then for a point belonging to the incorrect part of the Hough image, the returned line will not intersect the source image.

    The function calculates the coordinates of the line segment corresponding to a point in Hough space.

    Declaration

    Objective-C

    + (nonnull Int4 *)HoughPoint2Line:(nonnull Point2i *)houghPoint
                           srcImgInfo:(nonnull Mat *)srcImgInfo
                           angleRange:(AngleRangeOption)angleRange
                             makeSkew:(HoughDeskewOption)makeSkew;

    Swift

    class func HoughPoint2Line(houghPoint: Point2i, srcImgInfo: Mat, angleRange: AngleRangeOption, makeSkew: HoughDeskewOption) -> Int4
  • Calculates the coordinates of the line segment corresponding to a point in Hough space. @retval [Vec4i] Coordinates of the line segment corresponding to the point in Hough space. @remarks If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. @remarks If the rules parameter is set to RO_WEAK, then for a point belonging to the incorrect part of the Hough image, the returned line will not intersect the source image.

    The function calculates the coordinates of the line segment corresponding to a point in Hough space.

    Declaration

    Objective-C

    + (nonnull Int4 *)HoughPoint2Line:(nonnull Point2i *)houghPoint
                           srcImgInfo:(nonnull Mat *)srcImgInfo
                           angleRange:(AngleRangeOption)angleRange;

    Swift

    class func HoughPoint2Line(houghPoint: Point2i, srcImgInfo: Mat, angleRange: AngleRangeOption) -> Int4
  • Calculates the coordinates of the line segment corresponding to a point in Hough space. @retval [Vec4i] Coordinates of the line segment corresponding to the point in Hough space. @remarks If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. @remarks If the rules parameter is set to RO_WEAK, then for a point belonging to the incorrect part of the Hough image, the returned line will not intersect the source image.

    The function calculates the coordinates of the line segment corresponding to a point in Hough space.

    Declaration

    Objective-C

    + (nonnull Int4 *)HoughPoint2Line:(nonnull Point2i *)houghPoint
                           srcImgInfo:(nonnull Mat *)srcImgInfo;

    Swift

    class func HoughPoint2Line(houghPoint: Point2i, srcImgInfo: Mat) -> Int4
  • Calculates the 2D Fast Hough transform of an image.

    The function calculates the fast Hough transform for a full, half, or quarter range of angles.

    Declaration

    Objective-C

    + (void)FastHoughTransform:(nonnull Mat *)src
                           dst:(nonnull Mat *)dst
                   dstMatDepth:(int)dstMatDepth
                    angleRange:(AngleRangeOption)angleRange
                            op:(HoughOp)op
                      makeSkew:(HoughDeskewOption)makeSkew;

    Swift

    class func FastHoughTransform(src: Mat, dst: Mat, dstMatDepth: Int32, angleRange: AngleRangeOption, op: HoughOp, makeSkew: HoughDeskewOption)
  • Calculates the 2D Fast Hough transform of an image.

    The function calculates the fast Hough transform for a full, half, or quarter range of angles.

    Declaration

    Objective-C

    + (void)FastHoughTransform:(nonnull Mat *)src
                           dst:(nonnull Mat *)dst
                   dstMatDepth:(int)dstMatDepth
                    angleRange:(AngleRangeOption)angleRange
                            op:(HoughOp)op;

    Swift

    class func FastHoughTransform(src: Mat, dst: Mat, dstMatDepth: Int32, angleRange: AngleRangeOption, op: HoughOp)
  • Calculates the 2D Fast Hough transform of an image.

    The function calculates the fast Hough transform for a full, half, or quarter range of angles.

    Declaration

    Objective-C

    + (void)FastHoughTransform:(nonnull Mat *)src
                           dst:(nonnull Mat *)dst
                   dstMatDepth:(int)dstMatDepth
                    angleRange:(AngleRangeOption)angleRange;

    Swift

    class func FastHoughTransform(src: Mat, dst: Mat, dstMatDepth: Int32, angleRange: AngleRangeOption)
  • Calculates the 2D Fast Hough transform of an image.

    The function calculates the fast Hough transform for a full, half, or quarter range of angles.

    Declaration

    Objective-C

    + (void)FastHoughTransform:(nonnull Mat *)src
                           dst:(nonnull Mat *)dst
                   dstMatDepth:(int)dstMatDepth;

    Swift

    class func FastHoughTransform(src: Mat, dst: Mat, dstMatDepth: Int32)
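A sketch connecting FastHoughTransform to the HoughPoint2Line overloads above: run the transform on an edge map, then map a peak in Hough space back to a line segment. Swift, assuming the `opencv2` module and enum case names matching their C++ counterparts (`ARO_315_135`, `FHT_ADD`, `HDO_DESKEW`):

```swift
import opencv2

let edges = Mat()  // binary edge image, e.g. from Imgproc.Canny

// Accumulate the full-angle-range fast Hough transform.
let hough = Mat()
Ximgproc.FastHoughTransform(src: edges, dst: hough,
                            dstMatDepth: CvType.CV_32S,
                            angleRange: .ARO_315_135,
                            op: .FHT_ADD,
                            makeSkew: .HDO_DESKEW)

// A bright point in `hough` (e.g. located with Core.minMaxLoc)
// corresponds to a dominant line; map it back to image coordinates.
let peak = Point2i(x: 0, y: 0)  // placeholder peak location
let segment = Ximgproc.HoughPoint2Line(houghPoint: peak, srcImgInfo: edges)
```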
  • Applies the X Deriche filter to an image.

    For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf

    Declaration

    Objective-C

    + (void)GradientDericheX:(nonnull Mat *)op
                         dst:(nonnull Mat *)dst
                       alpha:(double)alpha
                       omega:(double)omega;

    Swift

    class func GradientDericheX(op: Mat, dst: Mat, alpha: Double, omega: Double)
  • Applies the Y Deriche filter to an image.

    For more details about this implementation, please see http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.476.5736&rep=rep1&type=pdf

    Declaration

    Objective-C

    + (void)GradientDericheY:(nonnull Mat *)op
                         dst:(nonnull Mat *)dst
                       alpha:(double)alpha
                       omega:(double)omega;

    Swift

    class func GradientDericheY(op: Mat, dst: Mat, alpha: Double, omega: Double)
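The X and Y filters above are typically combined into a gradient magnitude. A Swift sketch, assuming the `opencv2` module and that `Core.magnitude` carries the same argument labels as in the other bindings:

```swift
import opencv2

let src = Mat()  // grayscale input image

// Deriche derivatives along X and Y with identical smoothing parameters.
let gx = Mat()
let gy = Mat()
Ximgproc.GradientDericheX(op: src, dst: gx, alpha: 1.0, omega: 0.1)
Ximgproc.GradientDericheY(op: src, dst: gy, alpha: 1.0, omega: 0.1)

// Combine into an edge-strength (gradient magnitude) image.
let mag = Mat()
Core.magnitude(x: gx, y: gy, magnitude: mag)
```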
  • Declaration

    Objective-C

    + (void)PeiLinNormalization:(Mat *)I T:(Mat *)T;

    Swift

    class func PeiLinNormalization(I: Mat, T: Mat)
  • Simple one-line Adaptive Manifold Filter call.

    Note

    Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions. - see: bilateralFilter, +dtFilter:src:dst:sigmaSpatial:sigmaColor:mode:numIters:, +guidedFilter:src:dst:radius:eps:dDepth:

    Declaration

    Objective-C

    + (void)amFilter:(nonnull Mat *)joint
                    src:(nonnull Mat *)src
                    dst:(nonnull Mat *)dst
                sigma_s:(double)sigma_s
                sigma_r:(double)sigma_r
        adjust_outliers:(BOOL)adjust_outliers;

    Swift

    class func amFilter(joint: Mat, src: Mat, dst: Mat, sigma_s: Double, sigma_r: Double, adjust_outliers: Bool)

    Parameters

    joint

    joint (also called guided) image or array of images with any number of channels.

    src

    filtering image with any number of channels.

    dst

    output image.

    sigma_s

    spatial standard deviation.

    sigma_r

    color space standard deviation; it is similar to the color-space sigma in bilateralFilter.

    adjust_outliers

    optional flag specifying whether to perform the outlier adjustment operation (Eq. 9 in the original paper) or not.
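A minimal Swift call of the overload above; note that sigma_r lives in [0; 1], unlike bilateralFilter (the values here are illustrative, not recommendations):

```swift
import opencv2

let src = Mat()    // image to filter
let guide = Mat()  // joint/guide image; may be src itself
let dst = Mat()

// sigma_r is in the [0; 1] range per the note above.
Ximgproc.amFilter(joint: guide, src: src, dst: dst,
                  sigma_s: 16.0, sigma_r: 0.2, adjust_outliers: true)
```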

  • Simple one-line Adaptive Manifold Filter call.


    Note

    Joint images with CV_8U and CV_16U depth are converted to images with CV_32F depth and [0; 1] color range before processing. Hence the color space sigma sigma_r must be in the [0; 1] range, unlike the same sigmas in the bilateralFilter and dtFilter functions. - see: bilateralFilter, +dtFilter:src:dst:sigmaSpatial:sigmaColor:mode:numIters:, +guidedFilter:src:dst:radius:eps:dDepth:

    Declaration

    Objective-C

    + (void)amFilter:(nonnull Mat *)joint
                 src:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
             sigma_s:(double)sigma_s
             sigma_r:(double)sigma_r;

    Swift

    class func amFilter(joint: Mat, src: Mat, dst: Mat, sigma_s: Double, sigma_r: Double)

    Parameters

    joint

    joint (also called guided) image or array of images with any number of channels.

    src

    filtering image with any number of channels.

    dst

    output image.

    sigma_s

    spatial standard deviation.

    sigma_r

    color space standard deviation; it is similar to the color-space sigma in bilateralFilter.

  • Performs anisotropic diffusion on an image.

    The function applies Perona-Malik anisotropic diffusion to an image. This is the solution to the partial differential equation:

    \frac{\partial I}{\partial t} = \mathrm{div}\left(c(x,y,t)\,\nabla I\right) = \nabla c \cdot \nabla I + c(x,y,t)\,\Delta I

    Suggested functions for c(x,y,t) are:

    c\left(\|\nabla I\|\right) = e^{-\left(\|\nabla I\|/K\right)^{2}}

    or

    c\left(\|\nabla I\|\right) = \frac{1}{1+\left(\frac{\|\nabla I\|}{K}\right)^{2}}

    Declaration

    Objective-C

    + (void)anisotropicDiffusion:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                           alpha:(float)alpha
                               K:(float)K
                          niters:(int)niters;

    Swift

    class func anisotropicDiffusion(src: Mat, dst: Mat, alpha: Float, K: Float, niters: Int32)
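A short Swift sketch of the call above, with illustrative values: alpha is the integration step per iteration and K the conductance constant in c(.) (both choices here are assumptions, not recommended defaults):

```swift
import opencv2

let src = Mat()  // 8-bit 3-channel source
let dst = Mat()

// 10 diffusion iterations with step 0.15 and conductance constant 20.
Ximgproc.anisotropicDiffusion(src: src, dst: dst,
                              alpha: 0.15, K: 20.0, niters: 10)
```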
  • Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.

    Declaration

    Objective-C

    + (void)bilateralTextureFilter:(nonnull Mat *)src
                               dst:(nonnull Mat *)dst
                                fr:(int)fr
                           numIter:(int)numIter
                        sigmaAlpha:(double)sigmaAlpha
                          sigmaAvg:(double)sigmaAvg;

    Swift

    class func bilateralTextureFilter(src: Mat, dst: Mat, fr: Int32, numIter: Int32, sigmaAlpha: Double, sigmaAvg: Double)

    Parameters

    src

    Source image whose depth is 8-bit UINT or 32-bit FLOAT

    dst

    Destination image of the same size and type as src.

    fr

    Radius of the kernel to be used for filtering. It should be a positive integer.

    numIter

    Number of iterations of the algorithm. It should be a positive integer.

    sigmaAlpha

    Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means sharper transition. When the value is negative, it is automatically calculated.

    sigmaAvg

    Range blur parameter for texture blurring. Larger value makes result to be more blurred. When the value is negative, it is automatically calculated as described in the paper.

  • Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.

    Declaration

    Objective-C

    + (void)bilateralTextureFilter:(nonnull Mat *)src
                               dst:(nonnull Mat *)dst
                                fr:(int)fr
                           numIter:(int)numIter
                        sigmaAlpha:(double)sigmaAlpha;

    Swift

    class func bilateralTextureFilter(src: Mat, dst: Mat, fr: Int32, numIter: Int32, sigmaAlpha: Double)

    Parameters

    src

    Source image whose depth is 8-bit UINT or 32-bit FLOAT

    dst

    Destination image of the same size and type as src.

    fr

    Radius of the kernel to be used for filtering. It should be a positive integer.

    numIter

    Number of iterations of the algorithm. It should be a positive integer.

    sigmaAlpha

    Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means sharper transition. When the value is negative, it is automatically calculated.

  • Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.

    Declaration

    Objective-C

    + (void)bilateralTextureFilter:(nonnull Mat *)src
                               dst:(nonnull Mat *)dst
                                fr:(int)fr
                           numIter:(int)numIter;

    Swift

    class func bilateralTextureFilter(src: Mat, dst: Mat, fr: Int32, numIter: Int32)

    Parameters

    src

    Source image whose depth is 8-bit UINT or 32-bit FLOAT

    dst

    Destination image of the same size and type as src.

    fr

    Radius of the kernel to be used for filtering. It should be a positive integer.

    numIter

    Number of iterations of the algorithm. It should be a positive integer.

  • Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.

    Declaration

    Objective-C

    + (void)bilateralTextureFilter:(nonnull Mat *)src
                               dst:(nonnull Mat *)dst
                                fr:(int)fr;

    Swift

    class func bilateralTextureFilter(src: Mat, dst: Mat, fr: Int32)

    Parameters

    src

    Source image whose depth is 8-bit UINT or 32-bit FLOAT

    dst

    Destination image of the same size and type as src.

    fr

    Radius of the kernel to be used for filtering. It should be a positive integer.

  • Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter see CITE: Cho2014.

    Declaration

    Objective-C

    + (void)bilateralTextureFilter:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func bilateralTextureFilter(src: Mat, dst: Mat)

    Parameters

    src

    Source image whose depth is 8-bit UINT or 32-bit FLOAT

    dst

    Destination image of the same size and type as src.
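A minimal Swift call of the shortest overload above; fr = 3 is an illustrative kernel radius (unspecified parameters fall back to the binding's defaults):

```swift
import opencv2

let src = Mat()  // 8-bit or 32-bit float source
let dst = Mat()

// Larger fr removes larger-scale texture while preserving structure.
Ximgproc.bilateralTextureFilter(src: src, dst: dst, fr: 3)
```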

  • Compares a color template against overlapped color image regions.

    Declaration

    Objective-C

    + (void)colorMatchTemplate:(nonnull Mat *)img
                         templ:(nonnull Mat *)templ
                        result:(nonnull Mat *)result;

    Swift

    class func colorMatchTemplate(img: Mat, templ: Mat, result: Mat)
  • Contour sampling.

    Declaration

    Objective-C

    + (void)contourSampling:(nonnull Mat *)src
                        out:(nonnull Mat *)out
                      nbElt:(int)nbElt;

    Swift

    class func contourSampling(src: Mat, out: Mat, nbElt: Int32)
  • Computes the estimated covariance matrix of an image using the sliding window formulation.

    Declaration

    Objective-C

    + (void)covarianceEstimation:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                      windowRows:(int)windowRows
                      windowCols:(int)windowCols;

    Swift

    class func covarianceEstimation(src: Mat, dst: Mat, windowRows: Int32, windowCols: Int32)

    Parameters

    src

    The source image. Input image must be of a complex type.

    dst

    The destination estimated covariance matrix. Output matrix will be size (windowRows*windowCols, windowRows*windowCols).

    windowRows

    The number of rows in the window.

    windowCols

    The number of cols in the window. The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the sizes of the window will impact the number of samples and the number of elements in the estimated covariance matrix.
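A Swift sketch of the call above. The window sizes are illustrative; the key constraint is that the input must be complex-typed (e.g. CV_32FC2):

```swift
import opencv2

let complexSrc = Mat()  // complex-typed input, e.g. CV_32FC2
let cov = Mat()

// With an 8 x 8 window the output covariance matrix is 64 x 64.
Ximgproc.covarianceEstimation(src: complexSrc, dst: cov,
                              windowRows: 8, windowCols: 8)
```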

  • Creates a quaternion image.

    Declaration

    Objective-C

    + (void)createQuaternionImage:(nonnull Mat *)img qimg:(nonnull Mat *)qimg;

    Swift

    class func createQuaternionImage(img: Mat, qimg: Mat)
  • Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use the DTFilter interface to avoid extra computations at the initialization stage.

    Declaration

    Objective-C

    + (void)dtFilter:(nonnull Mat *)guide
                 src:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
        sigmaSpatial:(double)sigmaSpatial
          sigmaColor:(double)sigmaColor
                mode:(EdgeAwareFiltersList)mode
            numIters:(int)numIters;

    Swift

    class func dtFilter(guide: Mat, src: Mat, dst: Mat, sigmaSpatial: Double, sigmaColor: Double, mode: EdgeAwareFiltersList, numIters: Int32)

    Parameters

    guide

    guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

    src

    filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

    dst

    destination image

    sigmaSpatial

    {\sigma}_H
    parameter in the original article; it is similar to the coordinate-space sigma in bilateralFilter.

    sigmaColor

    {\sigma}_r
    parameter in the original article; it is similar to the color-space sigma in bilateralFilter.

    mode

    one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article.

    numIters

    optional number of iterations used for filtering, 3 is quite enough.
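A Swift sketch of the full overload above, with illustrative sigmas and the DTF_NC (normalized convolution) mode mentioned in the parameter list:

```swift
import opencv2

let guide = Mat()  // 8-bit or 32-bit guide, up to 4 channels
let src = Mat()
let dst = Mat()

// Sigmas play the same role as in bilateralFilter; 3 iterations suffice.
Ximgproc.dtFilter(guide: guide, src: src, dst: dst,
                  sigmaSpatial: 10.0, sigmaColor: 25.0,
                  mode: .DTF_NC, numIters: 3)
```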

  • Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use the DTFilter interface to avoid extra computations at the initialization stage.

    Declaration

    Objective-C

    + (void)dtFilter:(nonnull Mat *)guide
                 src:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
        sigmaSpatial:(double)sigmaSpatial
          sigmaColor:(double)sigmaColor
                mode:(EdgeAwareFiltersList)mode;

    Swift

    class func dtFilter(guide: Mat, src: Mat, dst: Mat, sigmaSpatial: Double, sigmaColor: Double, mode: EdgeAwareFiltersList)

    Parameters

    guide

    guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

    src

    filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

    dst

    destination image

    sigmaSpatial

    {\sigma}_H
    parameter in the original article; it is similar to the coordinate-space sigma in bilateralFilter.

    sigmaColor

    {\sigma}_r
    parameter in the original article; it is similar to the color-space sigma in bilateralFilter.

    mode

    one of three modes DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article.

  • Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use the DTFilter interface to avoid extra computations at the initialization stage.

    Declaration

    Objective-C

    + (void)dtFilter:(nonnull Mat *)guide
                 src:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
        sigmaSpatial:(double)sigmaSpatial
          sigmaColor:(double)sigmaColor;

    Swift

    class func dtFilter(guide: Mat, src: Mat, dst: Mat, sigmaSpatial: Double, sigmaColor: Double)

    Parameters

    guide

    guided image (also called joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

    src

    filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

    dst

    destination image

    sigmaSpatial

    {\sigma}_H
    parameter in the original article; it is similar to the coordinate-space sigma in bilateralFilter.

    sigmaColor

    {\sigma}_r
    parameter in the original article; it is similar to the color-space sigma in bilateralFilter.

  • Smoothes an image using the Edge-Preserving filter.

    The function smoothes Gaussian noise as well as salt & pepper noise. For more details about this implementation, please see [ReiWoe18] Reich, S. and Wörgötter, F. and Dellen, B. (2018). A Real-Time Edge-Preserving Denoising Filter. Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP): Visapp, 85-94, 4. DOI: 10.5220/0006509000850094.

    Declaration

    Objective-C

    + (void)edgePreservingFilter:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                               d:(int)d
                       threshold:(double)threshold;

    Swift

    class func edgePreservingFilter(src: Mat, dst: Mat, d: Int32, threshold: Double)

    Parameters

    src

    Source 8-bit 3-channel image.

    dst

    Destination image of the same size and type as src.

    d

    Diameter of each pixel neighborhood that is used during filtering. Must be greater than or equal to 3.

    threshold

    Threshold, which distinguishes between noise, outliers, and data.
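A minimal Swift call of the filter above (d = 9 and threshold = 20 are illustrative values, not defaults from the paper):

```swift
import opencv2

let src = Mat()  // 8-bit 3-channel source
let dst = Mat()

// d must be >= 3; threshold separates noise/outliers from data.
Ximgproc.edgePreservingFilter(src: src, dst: dst, d: 9, threshold: 20.0)
```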

  • Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Note

    Confidence images with CV_8U depth are expected to be in the [0, 255] range, and those with CV_32F depth in the [0, 1] range.

    Declaration

    Objective-C

    + (void)fastBilateralSolverFilter:(nonnull Mat *)guide
                                  src:(nonnull Mat *)src
                           confidence:(nonnull Mat *)confidence
                                  dst:(nonnull Mat *)dst
                        sigma_spatial:(double)sigma_spatial
                           sigma_luma:(double)sigma_luma
                         sigma_chroma:(double)sigma_chroma
                               lambda:(double)lambda
                             num_iter:(int)num_iter
                              max_tol:(double)max_tol;

    Swift

    class func fastBilateralSolverFilter(guide: Mat, src: Mat, confidence: Mat, dst: Mat, sigma_spatial: Double, sigma_luma: Double, sigma_chroma: Double, lambda: Double, num_iter: Int32, max_tol: Double)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    confidence

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

    dst

    destination image.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

    sigma_chroma

    parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

    lambda

    smoothness strength parameter for solver.

    num_iter

    number of iterations used for solver, 25 is usually enough.

    max_tol

    convergence tolerance used for solver.
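A Swift sketch of the full overload above. The sigma and lambda values are illustrative; 25 iterations is the rule of thumb from the parameter list:

```swift
import opencv2

let guide = Mat()       // 8-bit, 1 or 3 channels
let src = Mat()
let confidence = Mat()  // CV_8U in [0, 255] or CV_32F in [0, 1]
let dst = Mat()

Ximgproc.fastBilateralSolverFilter(guide: guide, src: src,
                                   confidence: confidence, dst: dst,
                                   sigma_spatial: 8.0, sigma_luma: 8.0,
                                   sigma_chroma: 8.0, lambda: 128.0,
                                   num_iter: 25, max_tol: 1e-5)
```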

  • Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Note

    Confidence images with CV_8U depth are expected to be in the [0, 255] range, and those with CV_32F depth in the [0, 1] range.

    Declaration

    Objective-C

    + (void)fastBilateralSolverFilter:(nonnull Mat *)guide
                                  src:(nonnull Mat *)src
                           confidence:(nonnull Mat *)confidence
                                  dst:(nonnull Mat *)dst
                        sigma_spatial:(double)sigma_spatial
                           sigma_luma:(double)sigma_luma
                         sigma_chroma:(double)sigma_chroma
                               lambda:(double)lambda
                             num_iter:(int)num_iter;

    Swift

    class func fastBilateralSolverFilter(guide: Mat, src: Mat, confidence: Mat, dst: Mat, sigma_spatial: Double, sigma_luma: Double, sigma_chroma: Double, lambda: Double, num_iter: Int32)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    confidence

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

    dst

    destination image.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

    sigma_chroma

    parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

    lambda

    smoothness strength parameter for solver.

    num_iter

    number of iterations used for solver, 25 is usually enough.

  • Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Note

    Confidence images with CV_8U depth are expected to be in the [0, 255] range, and those with CV_32F depth in the [0, 1] range.

    Declaration

    Objective-C

    + (void)fastBilateralSolverFilter:(nonnull Mat *)guide
                                  src:(nonnull Mat *)src
                           confidence:(nonnull Mat *)confidence
                                  dst:(nonnull Mat *)dst
                        sigma_spatial:(double)sigma_spatial
                           sigma_luma:(double)sigma_luma
                         sigma_chroma:(double)sigma_chroma
                               lambda:(double)lambda;

    Swift

    class func fastBilateralSolverFilter(guide: Mat, src: Mat, confidence: Mat, dst: Mat, sigma_spatial: Double, sigma_luma: Double, sigma_chroma: Double, lambda: Double)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    confidence

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

    dst

    destination image.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

    sigma_chroma

    parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

    lambda

    smoothness strength parameter for solver.

  • Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Note

    Confidence images with CV_8U depth are expected to be in the [0, 255] range, and those with CV_32F depth in the [0, 1] range.

    Declaration

    Objective-C

    + (void)fastBilateralSolverFilter:(nonnull Mat *)guide
                                  src:(nonnull Mat *)src
                           confidence:(nonnull Mat *)confidence
                                  dst:(nonnull Mat *)dst
                        sigma_spatial:(double)sigma_spatial
                           sigma_luma:(double)sigma_luma
                         sigma_chroma:(double)sigma_chroma;

    Swift

    class func fastBilateralSolverFilter(guide: Mat, src: Mat, confidence: Mat, dst: Mat, sigma_spatial: Double, sigma_luma: Double, sigma_chroma: Double)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    confidence

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

    dst

    destination image.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

    sigma_chroma

    parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

  • Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Note

    Confidence images with CV_8U depth are expected to be in the [0, 255] range, and those with CV_32F depth in the [0, 1] range.

    Declaration

    Objective-C

    + (void)fastBilateralSolverFilter:(nonnull Mat *)guide
                                  src:(nonnull Mat *)src
                           confidence:(nonnull Mat *)confidence
                                  dst:(nonnull Mat *)dst
                        sigma_spatial:(double)sigma_spatial
                           sigma_luma:(double)sigma_luma;

    Swift

    class func fastBilateralSolverFilter(guide: Mat, src: Mat, confidence: Mat, dst: Mat, sigma_spatial: Double, sigma_luma: Double)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    confidence

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

    dst

    destination image.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

    sigma_luma

    parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

  • Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Note

    Confidence images with CV_8U depth are expected to be in the [0, 255] range, and those with CV_32F depth in the [0, 1] range.

    Declaration

    Objective-C

    + (void)fastBilateralSolverFilter:(nonnull Mat *)guide
                                  src:(nonnull Mat *)src
                           confidence:(nonnull Mat *)confidence
                                  dst:(nonnull Mat *)dst
                        sigma_spatial:(double)sigma_spatial;

    Swift

    class func fastBilateralSolverFilter(guide: Mat, src: Mat, confidence: Mat, dst: Mat, sigma_spatial: Double)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    confidence

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

    dst

    destination image.

    sigma_spatial

    parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

  • Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.

    For more details about the Fast Bilateral Solver parameters, see the original paper CITE: BarronPoole2016.

    Note

    Confidence images with CV_8U depth are expected to be in the [0, 255] range, and CV_32F images in the [0, 1] range.

    Declaration

    Objective-C

    + (void)fastBilateralSolverFilter:(nonnull Mat *)guide
                                  src:(nonnull Mat *)src
                           confidence:(nonnull Mat *)confidence
                                  dst:(nonnull Mat *)dst;

    Swift

    class func fastBilateralSolverFilter(guide: Mat, src: Mat, confidence: Mat, dst: Mat)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    confidence

    confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

    dst

    destination image.

  • Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.

    Declaration

    Objective-C

    + (void)fastGlobalSmootherFilter:(nonnull Mat *)guide
                                 src:(nonnull Mat *)src
                                 dst:(nonnull Mat *)dst
                              lambda:(double)lambda
                         sigma_color:(double)sigma_color
                  lambda_attenuation:(double)lambda_attenuation
                            num_iter:(int)num_iter;

    Swift

    class func fastGlobalSmootherFilter(guide: Mat, src: Mat, dst: Mat, lambda: Double, sigma_color: Double, lambda_attenuation: Double, num_iter: Int32)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    dst

    destination image.

    lambda

    parameter defining the amount of regularization.

    sigma_color

    parameter similar to the color-space sigma in bilateralFilter.

    lambda_attenuation

    internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

    num_iter

    number of iterations used for filtering, 3 is usually enough.
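
    The interplay of lambda, lambda_attenuation, and num_iter can be illustrated with a toy schedule. This is only a sketch of the documented behavior (lambda decreases by the attenuation factor each iteration); the filter's actual internal scheduling is an implementation detail:

    ```swift
    import Foundation

    // Sketch of a per-iteration regularization weight: each iteration
    // the current lambda is multiplied by lambda_attenuation.
    func lambdaSchedule(lambda: Double, attenuation: Double, numIter: Int) -> [Double] {
        var result: [Double] = []
        var current = lambda
        for _ in 0..<numIter {
            result.append(current)
            current *= attenuation
        }
        return result
    }

    // With the recommended attenuation of 0.25, each of the 3 iterations
    // uses a quarter of the previous regularization strength.
    let schedule = lambdaSchedule(lambda: 8000.0, attenuation: 0.25, numIter: 3)
    ```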

  • Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.

    Declaration

    Objective-C

    + (void)fastGlobalSmootherFilter:(nonnull Mat *)guide
                                 src:(nonnull Mat *)src
                                 dst:(nonnull Mat *)dst
                              lambda:(double)lambda
                         sigma_color:(double)sigma_color
                  lambda_attenuation:(double)lambda_attenuation;

    Swift

    class func fastGlobalSmootherFilter(guide: Mat, src: Mat, dst: Mat, lambda: Double, sigma_color: Double, lambda_attenuation: Double)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    dst

    destination image.

    lambda

    parameter defining the amount of regularization.

    sigma_color

    parameter similar to the color-space sigma in bilateralFilter.

    lambda_attenuation

    internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

  • Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.

    Declaration

    Objective-C

    + (void)fastGlobalSmootherFilter:(nonnull Mat *)guide
                                 src:(nonnull Mat *)src
                                 dst:(nonnull Mat *)dst
                              lambda:(double)lambda
                         sigma_color:(double)sigma_color;

    Swift

    class func fastGlobalSmootherFilter(guide: Mat, src: Mat, dst: Mat, lambda: Double, sigma_color: Double)

    Parameters

    guide

    image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

    dst

    destination image.

    lambda

    parameter defining the amount of regularization.

    sigma_color

    parameter similar to the color-space sigma in bilateralFilter.

  • Fourier descriptors for planar closed curves

    For more details about this implementation, please see CITE: PersoonFu1977

    Declaration

    Objective-C

    + (void)fourierDescriptor:(nonnull Mat *)src
                          dst:(nonnull Mat *)dst
                        nbElt:(int)nbElt
                         nbFD:(int)nbFD;

    Swift

    class func fourierDescriptor(src: Mat, dst: Mat, nbElt: Int32, nbFD: Int32)
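
    Fourier descriptors are the DFT coefficients of the contour points treated as complex numbers x + iy. A naive pure-Swift sketch of that transform (the OpenCV routine additionally resamples the contour to nbElt points and truncates to nbFD descriptors):

    ```swift
    import Foundation

    // Naive DFT of a closed contour: each point (x, y) becomes x + iy,
    // and descriptor k is sum over n of z_n * exp(-2*pi*i*k*n/N).
    func fourierDescriptors(_ contour: [(x: Double, y: Double)]) -> [(re: Double, im: Double)] {
        let n = contour.count
        return (0..<n).map { k in
            var re = 0.0, im = 0.0
            for (idx, p) in contour.enumerated() {
                let angle = -2.0 * Double.pi * Double(k) * Double(idx) / Double(n)
                re += p.x * cos(angle) - p.y * sin(angle)
                im += p.x * sin(angle) + p.y * cos(angle)
            }
            return (re: re, im: im)
        }
    }

    // A contour centered at the origin has a zero DC (k = 0) descriptor,
    // which is why Fourier descriptors separate position from shape.
    let square: [(x: Double, y: Double)] = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    let fd = fourierDescriptors(square)
    ```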
  • Fourier descriptors for planar closed curves

    For more details about this implementation, please see CITE: PersoonFu1977

    Declaration

    Objective-C

    + (void)fourierDescriptor:(nonnull Mat *)src
                          dst:(nonnull Mat *)dst
                        nbElt:(int)nbElt;

    Swift

    class func fourierDescriptor(src: Mat, dst: Mat, nbElt: Int32)
  • Fourier descriptors for planar closed curves

    For more details about this implementation, please see CITE: PersoonFu1977

    Declaration

    Objective-C

    + (void)fourierDescriptor:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func fourierDescriptor(src: Mat, dst: Mat)
  • Simple one-line Guided Filter call.

    If you have multiple images to filter with the same guided image then use the GuidedFilter interface to avoid extra computations at the initialization stage.

    Declaration

    Objective-C

    + (void)guidedFilter:(nonnull Mat *)guide
                     src:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                  radius:(int)radius
                     eps:(double)eps
                  dDepth:(int)dDepth;

    Swift

    class func guidedFilter(guide: Mat, src: Mat, dst: Mat, radius: Int32, eps: Double, dDepth: Int32)

    Parameters

    guide

    guided image (or array of images) with up to 3 channels; if it has more than 3 channels, then only the first 3 channels will be used.

    src

    filtering image with any number of channels.

    dst

    output image.

    radius

    radius of Guided Filter.

    eps

    regularization term of Guided Filter.

    eps^2 is similar to the color-space sigma in bilateralFilter.

    dDepth

    optional depth of the output image.
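
    The role of eps can be seen in the guided filter's per-window linear model q = a*I + b, where a = cov(I, p) / (var(I) + eps): larger eps pushes a toward 0, i.e. more smoothing. A sketch for a single 1-D window (toy data; not the OpenCV implementation itself):

    ```swift
    import Foundation

    // Per-window linear coefficients of the guided filter model:
    // a = cov(I, p) / (var(I) + eps), b = mean(p) - a * mean(I).
    func guidedCoefficients(guide I: [Double], src p: [Double], eps: Double) -> (a: Double, b: Double) {
        let n = Double(I.count)
        let meanI = I.reduce(0, +) / n
        let meanP = p.reduce(0, +) / n
        var cov = 0.0, varI = 0.0
        for (gi, pi) in zip(I, p) {
            cov += (gi - meanI) * (pi - meanP)
            varI += (gi - meanI) * (gi - meanI)
        }
        cov /= n; varI /= n
        let a = cov / (varI + eps)
        return (a, meanP - a * meanI)
    }

    // When src == guide and eps is 0, the window is reproduced exactly (a = 1);
    // a large eps shrinks a and flattens the output toward the window mean.
    let window: [Double] = [10, 20, 30, 40]
    let exact = guidedCoefficients(guide: window, src: window, eps: 0)
    let smoothed = guidedCoefficients(guide: window, src: window, eps: 1000)
    ```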

  • Simple one-line Guided Filter call.

    If you have multiple images to filter with the same guided image then use the GuidedFilter interface to avoid extra computations at the initialization stage.

    Declaration

    Objective-C

    + (void)guidedFilter:(nonnull Mat *)guide
                     src:(nonnull Mat *)src
                     dst:(nonnull Mat *)dst
                  radius:(int)radius
                     eps:(double)eps;

    Swift

    class func guidedFilter(guide: Mat, src: Mat, dst: Mat, radius: Int32, eps: Double)

    Parameters

    guide

    guided image (or array of images) with up to 3 channels; if it has more than 3 channels, then only the first 3 channels will be used.

    src

    filtering image with any number of channels.

    dst

    output image.

    radius

    radius of Guided Filter.

    eps

    regularization term of Guided Filter.

    eps^2 is similar to the color-space sigma in bilateralFilter.

  • Applies the joint bilateral filter to an image.

    Note

    bilateralFilter and jointBilateralFilter use L1 norm to compute difference between colors.

    Declaration

    Objective-C

    + (void)jointBilateralFilter:(nonnull Mat *)joint
                             src:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                               d:(int)d
                      sigmaColor:(double)sigmaColor
                      sigmaSpace:(double)sigmaSpace
                      borderType:(int)borderType;

    Swift

    class func jointBilateralFilter(joint: Mat, src: Mat, dst: Mat, d: Int32, sigmaColor: Double, sigmaSpace: Double, borderType: Int32)

    Parameters

    joint

    Joint 8-bit or floating-point, 1-channel or 3-channel image.

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.

    dst

    Destination image of the same size and type as src .

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

    sigmaColor

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

    sigmaSpace

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .
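
    The note about the L1 norm means the color distance between two pixels is the sum of absolute per-channel differences. A toy weight computation in that spirit (a sketch only; the exact kernel shape and normalization are internal to OpenCV):

    ```swift
    import Foundation

    // L1 color distance: sum of absolute per-channel differences,
    // as noted for bilateralFilter and jointBilateralFilter.
    func l1ColorDistance(_ a: [Double], _ b: [Double]) -> Double {
        return zip(a, b).reduce(0) { $0 + abs($1.0 - $1.1) }
    }

    // A Gaussian-style falloff of that distance: similar colors get
    // weights near 1, dissimilar colors weights near 0.
    func colorWeight(_ a: [Double], _ b: [Double], sigmaColor: Double) -> Double {
        let d = l1ColorDistance(a, b)
        return exp(-0.5 * d * d / (sigmaColor * sigmaColor))
    }

    let near = colorWeight([100, 100, 100], [101, 99, 100], sigmaColor: 25)
    let far = colorWeight([100, 100, 100], [200, 0, 50], sigmaColor: 25)
    ```

    This is why a larger sigmaColor mixes more dissimilar colors: the falloff curve widens, so distant colors keep non-negligible weights.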

  • Applies the joint bilateral filter to an image.

    Note

    bilateralFilter and jointBilateralFilter use L1 norm to compute difference between colors.

    Declaration

    Objective-C

    + (void)jointBilateralFilter:(nonnull Mat *)joint
                             src:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                               d:(int)d
                      sigmaColor:(double)sigmaColor
                      sigmaSpace:(double)sigmaSpace;

    Swift

    class func jointBilateralFilter(joint: Mat, src: Mat, dst: Mat, d: Int32, sigmaColor: Double, sigmaSpace: Double)

    Parameters

    joint

    Joint 8-bit or floating-point, 1-channel or 3-channel image.

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.

    dst

    Destination image of the same size and type as src .

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

    sigmaColor

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

    sigmaSpace

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

  • Global image smoothing via L0 gradient minimization.

    For more details about L0 Smoother, see the original paper CITE: xu2011image.

    Declaration

    Objective-C

    + (void)l0Smooth:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
              lambda:(double)lambda
               kappa:(double)kappa;

    Swift

    class func l0Smooth(src: Mat, dst: Mat, lambda: Double, kappa: Double)

    Parameters

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.

    dst

    destination image.

    lambda

    parameter defining the smooth term weight.

    kappa

    parameter defining the increasing factor of the weight of the gradient data term.
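
    In the underlying algorithm (CITE: xu2011image), kappa controls how quickly an auxiliary penalty weight beta grows between iterations of the outer loop. A sketch of that schedule (the starting value 2*lambda and the 1e5 cap follow the paper and are assumptions here, not guaranteed to match this binding exactly):

    ```swift
    import Foundation

    // Sketch of the L0 smoothing outer loop's penalty schedule:
    // beta starts at 2 * lambda and is multiplied by kappa each
    // iteration until it exceeds betaMax.
    func betaSchedule(lambda: Double, kappa: Double, betaMax: Double = 1e5) -> [Double] {
        var betas: [Double] = []
        var beta = 2.0 * lambda
        while beta < betaMax {
            betas.append(beta)
            beta *= kappa
        }
        return betas
    }

    // A larger kappa reaches the cap sooner, so fewer (faster, coarser)
    // iterations are performed; a smaller kappa yields a finer result.
    let fine = betaSchedule(lambda: 0.02, kappa: 1.5)
    let coarse = betaSchedule(lambda: 0.02, kappa: 2.0)
    ```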

  • Global image smoothing via L0 gradient minimization.

    For more details about L0 Smoother, see the original paper CITE: xu2011image.

    Declaration

    Objective-C

    + (void)l0Smooth:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
              lambda:(double)lambda;

    Swift

    class func l0Smooth(src: Mat, dst: Mat, lambda: Double)

    Parameters

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.

    dst

    destination image.

    lambda

    parameter defining the smooth term weight.

  • Global image smoothing via L0 gradient minimization.

    For more details about L0 Smoother, see the original paper CITE: xu2011image.

    Declaration

    Objective-C

    + (void)l0Smooth:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func l0Smooth(src: Mat, dst: Mat)

    Parameters

    src

    source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.

    dst

    destination image.

  • Performs thresholding on input images using Niblack’s technique or some of the popular variations it inspired.

    The function transforms a grayscale image to a binary image according to the formulae:

    • THRESH_BINARY
      dst(x,y) = maxValue if src(x,y) > T(x,y), 0 otherwise
    • THRESH_BINARY_INV
      dst(x,y) = 0 if src(x,y) > T(x,y), maxValue otherwise

      where T(x,y) is a threshold calculated individually for each pixel.

    The threshold value T(x, y) is determined by the binarization method chosen. For classic Niblack, it is the mean minus k times the standard deviation of the blockSize x blockSize neighborhood of (x, y).

    The function can’t process the image in-place.

    See

    threshold, adaptiveThreshold

    Declaration

    Objective-C

    + (void)niBlackThreshold:(nonnull Mat *)_src
                        _dst:(nonnull Mat *)_dst
                    maxValue:(double)maxValue
                        type:(int)type
                   blockSize:(int)blockSize
                           k:(double)k
          binarizationMethod:(LocalBinarizationMethods)binarizationMethod
                           r:(double)r;

    Swift

    class func niBlackThreshold(_src: Mat, _dst: Mat, maxValue: Double, type: Int32, blockSize: Int32, k: Double, binarizationMethod: LocalBinarizationMethods, r: Double)

    Parameters

    _src

    Source 8-bit single-channel image.

    _dst

    Destination image of the same size and the same type as src.

    maxValue

    Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.

    type

    Thresholding type, see cv::ThresholdTypes.

    blockSize

    Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.

    k

    The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.

    binarizationMethod

    Binarization method to use. By default, Niblack’s technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.

    r

    The user-adjustable parameter used by Sauvola’s technique. This is the dynamic range of standard deviation.
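
    The classic Niblack threshold described above, T = mean - k * stddev over the blockSize neighborhood, can be computed directly. A pure-Swift sketch for a single pixel's neighborhood:

    ```swift
    import Foundation

    // Classic Niblack threshold for one pixel:
    // T = mean(neighborhood) - k * stddev(neighborhood).
    func niblackThreshold(neighborhood: [Double], k: Double) -> Double {
        let n = Double(neighborhood.count)
        let mean = neighborhood.reduce(0, +) / n
        let variance = neighborhood.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / n
        return mean - k * sqrt(variance)
    }

    // A flat neighborhood has zero deviation, so T equals the mean;
    // a textured neighborhood lowers T below the mean for positive k.
    let flat = niblackThreshold(neighborhood: [100, 100, 100, 100], k: 0.5)
    let textured = niblackThreshold(neighborhood: [50, 150, 50, 150], k: 0.5)
    ```

    Variants such as Sauvola replace this formula with one that also uses the dynamic range r, which is why r only matters for that binarization method.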

  • Performs thresholding on input images using Niblack’s technique or some of the popular variations it inspired.

    The function transforms a grayscale image to a binary image according to the formulae:

    • THRESH_BINARY
      dst(x,y) = maxValue if src(x,y) > T(x,y), 0 otherwise
    • THRESH_BINARY_INV
      dst(x,y) = 0 if src(x,y) > T(x,y), maxValue otherwise

      where T(x,y) is a threshold calculated individually for each pixel.

    The threshold value T(x, y) is determined by the binarization method chosen. For classic Niblack, it is the mean minus k times the standard deviation of the blockSize x blockSize neighborhood of (x, y).

    The function can’t process the image in-place.

    See

    threshold, adaptiveThreshold

    Declaration

    Objective-C

    + (void)niBlackThreshold:(nonnull Mat *)_src
                        _dst:(nonnull Mat *)_dst
                    maxValue:(double)maxValue
                        type:(int)type
                   blockSize:(int)blockSize
                           k:(double)k
          binarizationMethod:(LocalBinarizationMethods)binarizationMethod;

    Swift

    class func niBlackThreshold(_src: Mat, _dst: Mat, maxValue: Double, type: Int32, blockSize: Int32, k: Double, binarizationMethod: LocalBinarizationMethods)

    Parameters

    _src

    Source 8-bit single-channel image.

    _dst

    Destination image of the same size and the same type as src.

    maxValue

    Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.

    type

    Thresholding type, see cv::ThresholdTypes.

    blockSize

    Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.

    k

    The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.

    binarizationMethod

    Binarization method to use. By default, Niblack’s technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.

  • Performs thresholding on input images using Niblack’s technique or some of the popular variations it inspired.

    The function transforms a grayscale image to a binary image according to the formulae:

    • THRESH_BINARY
      dst(x,y) = maxValue if src(x,y) > T(x,y), 0 otherwise
    • THRESH_BINARY_INV
      dst(x,y) = 0 if src(x,y) > T(x,y), maxValue otherwise

      where T(x,y) is a threshold calculated individually for each pixel.

    The threshold value T(x, y) is determined by the binarization method chosen. For classic Niblack, it is the mean minus k times the standard deviation of the blockSize x blockSize neighborhood of (x, y).

    The function can’t process the image in-place.

    See

    threshold, adaptiveThreshold

    Declaration

    Objective-C

    + (void)niBlackThreshold:(nonnull Mat *)_src
                        _dst:(nonnull Mat *)_dst
                    maxValue:(double)maxValue
                        type:(int)type
                   blockSize:(int)blockSize
                           k:(double)k;

    Swift

    class func niBlackThreshold(_src: Mat, _dst: Mat, maxValue: Double, type: Int32, blockSize: Int32, k: Double)

    Parameters

    _src

    Source 8-bit single-channel image.

    _dst

    Destination image of the same size and the same type as src.

    maxValue

    Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.

    type

    Thresholding type, see cv::ThresholdTypes.

    blockSize

    Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.

    k

    The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.

  • Calculates the conjugate of a quaternion image.

    Declaration

    Objective-C

    + (void)qconj:(nonnull Mat *)qimg qcimg:(nonnull Mat *)qcimg;

    Swift

    class func qconj(qimg: Mat, qcimg: Mat)
  • Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.

    Declaration

    Objective-C

    + (void)qdft:(nonnull Mat *)img
            qimg:(nonnull Mat *)qimg
           flags:(int)flags
        sideLeft:(BOOL)sideLeft;

    Swift

    class func qdft(img: Mat, qimg: Mat, flags: Int32, sideLeft: Bool)
  • Calculates the per-element quaternion product of two arrays.

    Declaration

    Objective-C

    + (void)qmultiply:(nonnull Mat *)src1
                 src2:(nonnull Mat *)src2
                  dst:(nonnull Mat *)dst;

    Swift

    class func qmultiply(src1: Mat, src2: Mat, dst: Mat)
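
    Per-element quaternion multiplication is the Hamilton product. A pure-Swift sketch for a single pair of quaternions (w, x, y, z), independent of the Mat-based API (the Quaternion struct here is illustrative, not part of the binding):

    ```swift
    import Foundation

    struct Quaternion { var w, x, y, z: Double }

    // Hamilton product of two quaternions. Note it is not commutative:
    // qmul(a, b) generally differs from qmul(b, a).
    func qmul(_ a: Quaternion, _ b: Quaternion) -> Quaternion {
        return Quaternion(
            w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
        )
    }

    // The defining identity i * j = k.
    let i = Quaternion(w: 0, x: 1, y: 0, z: 0)
    let j = Quaternion(w: 0, x: 0, y: 1, z: 0)
    let k = qmul(i, j)
    ```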
  • Divides each element by its modulus.

    Declaration

    Objective-C

    + (void)qunitary:(nonnull Mat *)qimg qnimg:(nonnull Mat *)qnimg;

    Swift

    class func qunitary(qimg: Mat, qnimg: Mat)
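
    Dividing each element by its modulus normalizes every quaternion to unit length. A per-quaternion sketch (pure Swift; the Quat struct is illustrative, not part of the binding):

    ```swift
    import Foundation

    struct Quat { var w, x, y, z: Double }

    // Divide a quaternion by its modulus sqrt(w^2 + x^2 + y^2 + z^2),
    // yielding a unit quaternion.
    func unitary(_ q: Quat) -> Quat {
        let m = (q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z).squareRoot()
        return Quat(w: q.w / m, x: q.x / m, y: q.y / m, z: q.z / m)
    }

    // (3, 0, 4, 0) has modulus 5, so the result is (0.6, 0, 0.8, 0).
    let u = unitary(Quat(w: 3, x: 0, y: 4, z: 0))
    let modulus = (u.w * u.w + u.x * u.x + u.y * u.y + u.z * u.z).squareRoot()
    ```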
  • Applies the rolling guidance filter to an image.

    For more details, please see CITE: zhang2014rolling

    Note

    rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter.

    Declaration

    Objective-C

    + (void)rollingGuidanceFilter:(nonnull Mat *)src
                              dst:(nonnull Mat *)dst
                                d:(int)d
                       sigmaColor:(double)sigmaColor
                       sigmaSpace:(double)sigmaSpace
                        numOfIter:(int)numOfIter
                       borderType:(int)borderType;

    Swift

    class func rollingGuidanceFilter(src: Mat, dst: Mat, d: Int32, sigmaColor: Double, sigmaSpace: Double, numOfIter: Int32, borderType: Int32)

    Parameters

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image.

    dst

    Destination image of the same size and type as src.

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

    sigmaColor

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

    sigmaSpace

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

    numOfIter

    Number of iterations of joint edge-preserving filtering applied on the source image.

  • Applies the rolling guidance filter to an image.

    For more details, please see CITE: zhang2014rolling

    Note

    rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter.

    Declaration

    Objective-C

    + (void)rollingGuidanceFilter:(nonnull Mat *)src
                              dst:(nonnull Mat *)dst
                                d:(int)d
                       sigmaColor:(double)sigmaColor
                       sigmaSpace:(double)sigmaSpace
                        numOfIter:(int)numOfIter;

    Swift

    class func rollingGuidanceFilter(src: Mat, dst: Mat, d: Int32, sigmaColor: Double, sigmaSpace: Double, numOfIter: Int32)

    Parameters

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image.

    dst

    Destination image of the same size and type as src.

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

    sigmaColor

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

    sigmaSpace

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

    numOfIter

    Number of iterations of joint edge-preserving filtering applied on the source image.

  • Applies the rolling guidance filter to an image.

    For more details, please see CITE: zhang2014rolling

    Note

    rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter.

    Declaration

    Objective-C

    + (void)rollingGuidanceFilter:(nonnull Mat *)src
                              dst:(nonnull Mat *)dst
                                d:(int)d
                       sigmaColor:(double)sigmaColor
                       sigmaSpace:(double)sigmaSpace;

    Swift

    class func rollingGuidanceFilter(src: Mat, dst: Mat, d: Int32, sigmaColor: Double, sigmaSpace: Double)

    Parameters

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image.

    dst

    Destination image of the same size and type as src.

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

    sigmaColor

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

    sigmaSpace

    Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

  • Applies the rolling guidance filter to an image.

    For more details, please see CITE: zhang2014rolling

    Note

    rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter.

    Declaration

    Objective-C

    + (void)rollingGuidanceFilter:(nonnull Mat *)src
                              dst:(nonnull Mat *)dst
                                d:(int)d
                       sigmaColor:(double)sigmaColor;

    Swift

    class func rollingGuidanceFilter(src: Mat, dst: Mat, d: Int32, sigmaColor: Double)

    Parameters

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image.

    dst

    Destination image of the same size and type as src.

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

    sigmaColor

    Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

  • Applies the rolling guidance filter to an image.

    For more details, please see CITE: zhang2014rolling

    Note

    rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter.

    Declaration

    Objective-C

    + (void)rollingGuidanceFilter:(nonnull Mat *)src
                              dst:(nonnull Mat *)dst
                                d:(int)d;

    Swift

    class func rollingGuidanceFilter(src: Mat, dst: Mat, d: Int32)

    Parameters

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image.

    dst

    Destination image of the same size and type as src.

    d

    Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

  • Applies the rolling guidance filter to an image.

    For more details, please see CITE: zhang2014rolling

    Note

    rollingGuidanceFilter uses jointBilateralFilter as the edge-preserving filter.

    Declaration

    Objective-C

    + (void)rollingGuidanceFilter:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func rollingGuidanceFilter(src: Mat, dst: Mat)

    Parameters

    src

    Source 8-bit or floating-point, 1-channel or 3-channel image.

    dst

    Destination image of the same size and type as src.

  • Applies a binary blob thinning operation to achieve a skeletonization of the input image.

    The function transforms a binary blob image into a skeletonized form using the Zhang-Suen technique.

    Declaration

    Objective-C

    + (void)thinning:(nonnull Mat *)src
                 dst:(nonnull Mat *)dst
        thinningType:(ThinningTypes)thinningType;

    Swift

    class func thinning(src: Mat, dst: Mat, thinningType: ThinningTypes)

    Parameters

    src

    Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.

    dst

    Destination image of the same size and the same type as src. The function can work in-place.

    thinningType

    Value that defines which thinning algorithm should be used. See cv::ximgproc::ThinningTypes
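
    The Zhang-Suen technique decides whether to delete a pixel from its 8 neighbors: B (number of foreground neighbors) must lie in 2...6 and A (number of 0-to-1 transitions around the pixel) must be exactly 1, plus two directional product conditions per sub-iteration. A sketch of the A and B tests (pure Swift; neighbors given clockwise from the north neighbor):

    ```swift
    import Foundation

    // Zhang-Suen neighbor tests for one pixel. `n` holds the 8 neighbors
    // p2...p9 clockwise starting at north; 1 = foreground, 0 = background.
    func zhangSuenAB(_ n: [Int]) -> (a: Int, b: Int) {
        let b = n.reduce(0, +)                       // B: foreground neighbors
        var a = 0                                    // A: 0 -> 1 transitions
        for i in 0..<8 where n[i] == 0 && n[(i + 1) % 8] == 1 {
            a += 1
        }
        return (a, b)
    }

    // A boundary pixel on a solid region passes both tests (A = 1, 2 <= B <= 6);
    // the per-sub-iteration directional checks are omitted from this sketch.
    let edge = zhangSuenAB([0, 0, 0, 1, 1, 1, 1, 0])
    let passesAB = edge.a == 1 && (2...6).contains(edge.b)
    ```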

  • Applies a binary blob thinning operation to achieve a skeletonization of the input image.

    The function transforms a binary blob image into a skeletonized form using the Zhang-Suen technique.

    Declaration

    Objective-C

    + (void)thinning:(nonnull Mat *)src dst:(nonnull Mat *)dst;

    Swift

    class func thinning(src: Mat, dst: Mat)

    Parameters

    src

    Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.

    dst

    Destination image of the same size and the same type as src. The function can work in-place.

  • Transforms a contour (or its Fourier descriptors, when fdContour is true).

    Declaration

    Objective-C

    + (void)transformFD:(nonnull Mat *)src
                      t:(nonnull Mat *)t
                    dst:(nonnull Mat *)dst
              fdContour:(BOOL)fdContour;

    Swift

    class func transformFD(src: Mat, t: Mat, dst: Mat, fdContour: Bool)
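
    A hedged Swift sketch of the intended pipeline; the transform Mat t would in practice come from ContourFitting.estimateTransformation, and the shapes below are illustrative rather than normative:

    ```swift
    import opencv2

    // src: contour points when fdContour is false (CV_64FC2, one point per row),
    // or Fourier descriptors when fdContour is true.
    let src = Mat(rows: 16, cols: 1, type: CvType.CV_64FC2)

    // t: transform parameters; in a real pipeline this Mat is filled by
    // ContourFitting.estimateTransformation before being passed here.
    let t = Mat()

    let dst = Mat()
    Ximgproc.transformFD(src: src, t: t, dst: dst, fdContour: false)
    ```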
  • Transforms a contour.

    Declaration

    Objective-C

    + (void)transformFD:(nonnull Mat *)src
                      t:(nonnull Mat *)t
                    dst:(nonnull Mat *)dst;

    Swift

    class func transformFD(src: Mat, t: Mat, dst: Mat)
  • Applies a weighted median filter to an image.

    For more details about this implementation, please see CITE: zhang2014100+

    If the corresponding mask value is 0, the pixel will be ignored when maintaining the joint-histogram. This is useful for applications like optical flow occlusion handling.

    Declaration

    Objective-C

    + (void)weightedMedianFilter:(nonnull Mat *)joint
                             src:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                               r:(int)r
                           sigma:(double)sigma
                      weightType:(WMFWeightType)weightType
                            mask:(nonnull Mat *)mask;

    Swift

    class func weightedMedianFilter(joint: Mat, src: Mat, dst: Mat, r: Int32, sigma: Double, weightType: WMFWeightType, mask: Mat)
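
    A hedged Swift sketch of the full overload, assuming the WMF_EXP case of WMFWeightType from the Swift bindings; joint guides the filtering of src, and the Mat contents are illustrative:

    ```swift
    import opencv2

    // Guidance image and the image to be filtered (placeholder contents).
    let joint = Mat(rows: 64, cols: 64, type: CvType.CV_8UC3, scalar: Scalar(100, 100, 100))
    let src = Mat(rows: 64, cols: 64, type: CvType.CV_8UC1, scalar: Scalar(50))
    let dst = Mat()
    let mask = Mat()  // empty: no pixels excluded from the joint-histogram

    // r is the window radius; sigma controls how quickly the guidance
    // weights fall off with color difference in joint.
    Ximgproc.weightedMedianFilter(joint: joint, src: src, dst: dst,
                                  r: 7, sigma: 25.5, weightType: .WMF_EXP, mask: mask)
    ```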
  • Applies a weighted median filter to an image.

    For more details about this implementation, please see CITE: zhang2014100+

    In the overload taking a mask, pixels whose mask value is 0 are ignored when maintaining the joint-histogram. This is useful for applications like optical flow occlusion handling.

    Declaration

    Objective-C

    + (void)weightedMedianFilter:(nonnull Mat *)joint
                             src:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                               r:(int)r
                           sigma:(double)sigma
                      weightType:(WMFWeightType)weightType;

    Swift

    class func weightedMedianFilter(joint: Mat, src: Mat, dst: Mat, r: Int32, sigma: Double, weightType: WMFWeightType)
  • Applies a weighted median filter to an image.

    For more details about this implementation, please see CITE: zhang2014100+

    In the overload taking a mask, pixels whose mask value is 0 are ignored when maintaining the joint-histogram. This is useful for applications like optical flow occlusion handling.

    Declaration

    Objective-C

    + (void)weightedMedianFilter:(nonnull Mat *)joint
                             src:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                               r:(int)r
                           sigma:(double)sigma;

    Swift

    class func weightedMedianFilter(joint: Mat, src: Mat, dst: Mat, r: Int32, sigma: Double)
  • Applies a weighted median filter to an image.

    For more details about this implementation, please see CITE: zhang2014100+

    In the overload taking a mask, pixels whose mask value is 0 are ignored when maintaining the joint-histogram. This is useful for applications like optical flow occlusion handling.

    Declaration

    Objective-C

    + (void)weightedMedianFilter:(nonnull Mat *)joint
                             src:(nonnull Mat *)src
                             dst:(nonnull Mat *)dst
                               r:(int)r;

    Swift

    class func weightedMedianFilter(joint: Mat, src: Mat, dst: Mat, r: Int32)