CascadeClassifier

Objective-C

@interface CascadeClassifier : NSObject

Swift

class CascadeClassifier : NSObject

Cascade classifier class for object detection.

Member of Objdetect

Methods

  • Loads a classifier from a file.

    Declaration

    Objective-C

    - (nonnull instancetype)initWithFilename:(nonnull NSString *)filename;

    Swift

    init(filename: String)

    Parameters

    filename

    Name of the file from which the classifier is loaded.
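
    A minimal sketch of constructing a classifier from a bundled cascade file, assuming the opencv2 iOS framework is linked; the cascade filename is illustrative:

    ```swift
    import Foundation
    import opencv2

    guard let path = Bundle.main.path(forResource: "haarcascade_frontalface_default",
                                      ofType: "xml") else {
        fatalError("cascade file not found in bundle")
    }
    let classifier = CascadeClassifier(filename: path)
    ```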

  • Creates an empty classifier. Use load to load a cascade afterwards.

    Declaration

    Objective-C

    - (instancetype)init;

    Swift

    init()
  • Returns the size of the detection window the classifier was trained on.

    Declaration

    Objective-C

    - (Size2i*)getOriginalWindowSize NS_SWIFT_NAME(getOriginalWindowSize());

    Swift

    func getOriginalWindowSize() -> Size2i
  • Converts an old-format cascade file to the newer format.

    Declaration

    Objective-C

    + (BOOL)convert:(NSString*)oldcascade newcascade:(NSString*)newcascade NS_SWIFT_NAME(convert(oldcascade:newcascade:));

    Swift

    class func convert(oldcascade: String, newcascade: String) -> Bool
  • Checks whether the classifier has been loaded.

    Declaration

    Objective-C

    - (BOOL)empty;

    Swift

    func empty() -> Bool
  • Checks whether the classifier is of the old (legacy) cascade format.

    Declaration

    Objective-C

    - (BOOL)isOldFormatCascade NS_SWIFT_NAME(isOldFormatCascade());

    Swift

    func isOldFormatCascade() -> Bool
  • Loads a classifier from a file.

    Declaration

    Objective-C

    - (BOOL)load:(nonnull NSString *)filename;

    Swift

    func load(filename: String) -> Bool

    Parameters

    filename

    Name of the file from which the classifier is loaded. The file may contain an old HAAR classifier trained by the haartraining application or a new cascade classifier trained by the traincascade application.
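
    A sketch of deferred loading: construct an empty classifier, load a cascade at runtime, and verify the load with empty(). `path` is a hypothetical path to a cascade XML file:

    ```swift
    import opencv2

    let classifier = CascadeClassifier()
    if !classifier.load(filename: path) || classifier.empty() {
        print("failed to load cascade at \(path)")
    }
    ```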

  • Returns the type of features used by the classifier.

    Declaration

    Objective-C

    - (int)getFeatureType NS_SWIFT_NAME(getFeatureType());

    Swift

    func getFeatureType() -> Int32
  • Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.

    Declaration

    Objective-C

    - (void)detectMultiScale:(nonnull Mat *)image
                     objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 scaleFactor:(double)scaleFactor
                minNeighbors:(int)minNeighbors
                       flags:(int)flags
                     minSize:(nonnull Size2i *)minSize
                     maxSize:(nonnull Size2i *)maxSize;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, scaleFactor: Double, minNeighbors: Int32, flags: Int32, minSize: Size2i, maxSize: Size2i)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    minNeighbors

    Parameter specifying how many neighbors each candidate rectangle should have to retain it.

    flags

    Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade.

    minSize

    Minimum possible object size. Objects smaller than that are ignored.

    maxSize

    Maximum possible object size. Objects larger than that are ignored. If maxSize == minSize, the model is evaluated on a single scale.

    The function is parallelized with the TBB library.

    Note: A face detection example using cascade classifiers (Python) can be found at opencv_source_code/samples/python/facedetect.py
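
    A minimal Swift sketch of this overload, assuming the opencv2 iOS framework is linked, `classifier` is a loaded CascadeClassifier, and `gray` is a CV_8U grayscale Mat prepared elsewhere:

    ```swift
    import opencv2

    let faces = NSMutableArray()
    classifier.detectMultiScale(image: gray,
                                objects: faces,
                                scaleFactor: 1.1,   // shrink the image ~10% per scale
                                minNeighbors: 3,    // require 3 merged neighbors per detection
                                flags: 0,           // ignored for new-format cascades
                                minSize: Size2i(width: 30, height: 30),
                                maxSize: Size2i())  // empty size: no upper bound
    for case let face as Rect2i in faces {
        print("face at (\(face.x), \(face.y)), \(face.width)x\(face.height)")
    }
    ```

    Raising minNeighbors suppresses weak, isolated candidates at the cost of recall; a smaller scaleFactor finds more sizes but costs proportionally more time.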

  • Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.

    Declaration

    Objective-C

    - (void)detectMultiScale:(nonnull Mat *)image
                     objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 scaleFactor:(double)scaleFactor
                minNeighbors:(int)minNeighbors
                       flags:(int)flags
                     minSize:(nonnull Size2i *)minSize;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, scaleFactor: Double, minNeighbors: Int32, flags: Int32, minSize: Size2i)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    minNeighbors

    Parameter specifying how many neighbors each candidate rectangle should have to retain it.

    flags

    Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade.

    minSize

    Minimum possible object size. Objects smaller than that are ignored.

    The function is parallelized with the TBB library.

    Note: A face detection example using cascade classifiers (Python) can be found at opencv_source_code/samples/python/facedetect.py

  • Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.

    Declaration

    Objective-C

    - (void)detectMultiScale:(nonnull Mat *)image
                     objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 scaleFactor:(double)scaleFactor
                minNeighbors:(int)minNeighbors
                       flags:(int)flags;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, scaleFactor: Double, minNeighbors: Int32, flags: Int32)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    minNeighbors

    Parameter specifying how many neighbors each candidate rectangle should have to retain it.

    flags

    Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade.

    The function is parallelized with the TBB library.

    Note: A face detection example using cascade classifiers (Python) can be found at opencv_source_code/samples/python/facedetect.py

  • Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.

    Declaration

    Objective-C

    - (void)detectMultiScale:(nonnull Mat *)image
                     objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 scaleFactor:(double)scaleFactor
                minNeighbors:(int)minNeighbors;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, scaleFactor: Double, minNeighbors: Int32)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    minNeighbors

    Parameter specifying how many neighbors each candidate rectangle should have to retain it.

    The function is parallelized with the TBB library.

    Note: A face detection example using cascade classifiers (Python) can be found at opencv_source_code/samples/python/facedetect.py

  • Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.

    Declaration

    Objective-C

    - (void)detectMultiScale:(nonnull Mat *)image
                     objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 scaleFactor:(double)scaleFactor;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, scaleFactor: Double)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    The function is parallelized with the TBB library.

    Note: A face detection example using cascade classifiers (Python) can be found at opencv_source_code/samples/python/facedetect.py

  • Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.

    Declaration

    Objective-C

    - (void)detectMultiScale:(nonnull Mat *)image
                     objects:(nonnull NSMutableArray<Rect2i *> *)objects;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    The function is parallelized with the TBB library.

    Note: A face detection example using cascade classifiers (Python) can be found at opencv_source_code/samples/python/facedetect.py

  • Detects objects of different sizes in the input image and reports, for each detected object, the number of merged neighboring detections.

    Declaration

    Objective-C

    - (void)detectMultiScale2:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                numDetections:(nonnull IntVector *)numDetections
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors
                        flags:(int)flags
                      minSize:(nonnull Size2i *)minSize
                      maxSize:(nonnull Size2i *)maxSize;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, numDetections: IntVector, scaleFactor: Double, minNeighbors: Int32, flags: Int32, minSize: Size2i, maxSize: Size2i)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    numDetections

    Vector of detection numbers for the corresponding objects. An object’s number of detections is the number of neighboring positively classified rectangles that were joined together to form the object.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    minNeighbors

    Parameter specifying how many neighbors each candidate rectangle should have to retain it.

    flags

    Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade.

    minSize

    Minimum possible object size. Objects smaller than that are ignored.

    maxSize

    Maximum possible object size. Objects larger than that are ignored. If maxSize == minSize, the model is evaluated on a single scale.
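
    A sketch of the same detection call that also collects the per-object detection counts. It assumes `classifier` is loaded and `gray` is a CV_8U Mat; the IntVector initializer shown is an assumption about the binding:

    ```swift
    import opencv2

    let objects = NSMutableArray()
    let numDetections = IntVector([])
    classifier.detectMultiScale(image: gray, objects: objects,
                                numDetections: numDetections,
                                scaleFactor: 1.1, minNeighbors: 3, flags: 0,
                                minSize: Size2i(), maxSize: Size2i())
    // numDetections holds, per rectangle in objects, how many neighboring
    // candidates were merged into it; higher counts generally indicate
    // more confident detections.
    ```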

  • Declaration

    Objective-C

    - (void)detectMultiScale2:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                numDetections:(nonnull IntVector *)numDetections
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors
                        flags:(int)flags
                      minSize:(nonnull Size2i *)minSize;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, numDetections: IntVector, scaleFactor: Double, minNeighbors: Int32, flags: Int32, minSize: Size2i)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    numDetections

    Vector of detection numbers for the corresponding objects. An object’s number of detections is the number of neighboring positively classified rectangles that were joined together to form the object.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    minNeighbors

    Parameter specifying how many neighbors each candidate rectangle should have to retain it.

    flags

    Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade.

    minSize

    Minimum possible object size. Objects smaller than that are ignored.

  • Declaration

    Objective-C

    - (void)detectMultiScale2:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                numDetections:(nonnull IntVector *)numDetections
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors
                        flags:(int)flags;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, numDetections: IntVector, scaleFactor: Double, minNeighbors: Int32, flags: Int32)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    numDetections

    Vector of detection numbers for the corresponding objects. An object’s number of detections is the number of neighboring positively classified rectangles that were joined together to form the object.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    minNeighbors

    Parameter specifying how many neighbors each candidate rectangle should have to retain it.

    flags

    Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade.

  • Declaration

    Objective-C

    - (void)detectMultiScale2:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                numDetections:(nonnull IntVector *)numDetections
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, numDetections: IntVector, scaleFactor: Double, minNeighbors: Int32)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    numDetections

    Vector of detection numbers for the corresponding objects. An object’s number of detections is the number of neighboring positively classified rectangles that were joined together to form the object.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

    minNeighbors

    Parameter specifying how many neighbors each candidate rectangle should have to retain it.

  • Declaration

    Objective-C

    - (void)detectMultiScale2:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                numDetections:(nonnull IntVector *)numDetections
                  scaleFactor:(double)scaleFactor;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, numDetections: IntVector, scaleFactor: Double)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    numDetections

    Vector of detection numbers for the corresponding objects. An object’s number of detections is the number of neighboring positively classified rectangles that were joined together to form the object.

    scaleFactor

    Parameter specifying how much the image size is reduced at each image scale.

  • Declaration

    Objective-C

    - (void)detectMultiScale2:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                numDetections:(nonnull IntVector *)numDetections;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, numDetections: IntVector)

    Parameters

    image

    Matrix of the type CV_8U containing an image where objects are detected.

    objects

    Vector of rectangles where each rectangle contains the detected object; the rectangles may be partially outside the original image.

    numDetections

    Vector of detection numbers for the corresponding objects. An object’s number of detections is the number of neighboring positively classified rectangles that were joined together to form the object.

  •  This function allows you to retrieve the final stage decision certainty of classification.
     For this, one needs to set `outputRejectLevels` to true and provide the `rejectLevels` and `levelWeights` parameters.
     For each resulting detection, `levelWeights` will then contain the certainty of classification at the final stage.
     This value can then be used to separate strong from weaker classifications.
    
     A code sample on how to use it efficiently can be found below:
    
     Mat img;
     vector<double> weights;
     vector<int> levels;
     vector<Rect> detections;
     CascadeClassifier model("/path/to/your/model.xml");
     model.detectMultiScale(img, detections, levels, weights, 1.1, 3, 0, Size(), Size(), true);
     cerr << "Detection " << detections[0] << " with weight " << weights[0] << endl;
    

    Declaration

    Objective-C

    - (void)detectMultiScale3:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 rejectLevels:(nonnull IntVector *)rejectLevels
                 levelWeights:(nonnull DoubleVector *)levelWeights
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors
                        flags:(int)flags
                      minSize:(nonnull Size2i *)minSize
                      maxSize:(nonnull Size2i *)maxSize
           outputRejectLevels:(BOOL)outputRejectLevels;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, rejectLevels: IntVector, levelWeights: DoubleVector, scaleFactor: Double, minNeighbors: Int32, flags: Int32, minSize: Size2i, maxSize: Size2i, outputRejectLevels: Bool)
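
    The C++ sample above can be sketched in Swift roughly as follows, assuming `classifier` is a loaded CascadeClassifier and `img` a CV_8U Mat; the IntVector and DoubleVector initializers are assumptions about the binding:

    ```swift
    import opencv2

    let detections = NSMutableArray()
    let rejectLevels = IntVector([])
    let levelWeights = DoubleVector([])
    classifier.detectMultiScale(image: img, objects: detections,
                                rejectLevels: rejectLevels,
                                levelWeights: levelWeights,
                                scaleFactor: 1.1, minNeighbors: 3, flags: 0,
                                minSize: Size2i(), maxSize: Size2i(),
                                outputRejectLevels: true)
    // levelWeights now holds the final-stage certainty for each entry in
    // detections; thresholding on it separates strong from weak detections.
    ```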
  •  This function allows you to retrieve the final stage decision certainty of classification.
     For this, one needs to set `outputRejectLevels` to true and provide the `rejectLevels` and `levelWeights` parameters.
     For each resulting detection, `levelWeights` will then contain the certainty of classification at the final stage.
     This value can then be used to separate strong from weaker classifications.
    
     A code sample on how to use it efficiently can be found below:
    
     Mat img;
     vector<double> weights;
     vector<int> levels;
     vector<Rect> detections;
     CascadeClassifier model("/path/to/your/model.xml");
     model.detectMultiScale(img, detections, levels, weights, 1.1, 3, 0, Size(), Size(), true);
     cerr << "Detection " << detections[0] << " with weight " << weights[0] << endl;
    

    Declaration

    Objective-C

    - (void)detectMultiScale3:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 rejectLevels:(nonnull IntVector *)rejectLevels
                 levelWeights:(nonnull DoubleVector *)levelWeights
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors
                        flags:(int)flags
                      minSize:(nonnull Size2i *)minSize
                      maxSize:(nonnull Size2i *)maxSize;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, rejectLevels: IntVector, levelWeights: DoubleVector, scaleFactor: Double, minNeighbors: Int32, flags: Int32, minSize: Size2i, maxSize: Size2i)
  •  This function allows you to retrieve the final stage decision certainty of classification.
     For this, one needs to set `outputRejectLevels` to true and provide the `rejectLevels` and `levelWeights` parameters.
     For each resulting detection, `levelWeights` will then contain the certainty of classification at the final stage.
     This value can then be used to separate strong from weaker classifications.
    
     A code sample on how to use it efficiently can be found below:
    
     Mat img;
     vector<double> weights;
     vector<int> levels;
     vector<Rect> detections;
     CascadeClassifier model("/path/to/your/model.xml");
     model.detectMultiScale(img, detections, levels, weights, 1.1, 3, 0, Size(), Size(), true);
     cerr << "Detection " << detections[0] << " with weight " << weights[0] << endl;
    

    Declaration

    Objective-C

    - (void)detectMultiScale3:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 rejectLevels:(nonnull IntVector *)rejectLevels
                 levelWeights:(nonnull DoubleVector *)levelWeights
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors
                        flags:(int)flags
                      minSize:(nonnull Size2i *)minSize;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, rejectLevels: IntVector, levelWeights: DoubleVector, scaleFactor: Double, minNeighbors: Int32, flags: Int32, minSize: Size2i)
  •  This function allows you to retrieve the final stage decision certainty of classification.
     For this, one needs to set `outputRejectLevels` to true and provide the `rejectLevels` and `levelWeights` parameters.
     For each resulting detection, `levelWeights` will then contain the certainty of classification at the final stage.
     This value can then be used to separate strong from weaker classifications.
    
     A code sample on how to use it efficiently can be found below:
    
     Mat img;
     vector<double> weights;
     vector<int> levels;
     vector<Rect> detections;
     CascadeClassifier model("/path/to/your/model.xml");
     model.detectMultiScale(img, detections, levels, weights, 1.1, 3, 0, Size(), Size(), true);
     cerr << "Detection " << detections[0] << " with weight " << weights[0] << endl;
    

    Declaration

    Objective-C

    - (void)detectMultiScale3:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 rejectLevels:(nonnull IntVector *)rejectLevels
                 levelWeights:(nonnull DoubleVector *)levelWeights
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors
                        flags:(int)flags;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, rejectLevels: IntVector, levelWeights: DoubleVector, scaleFactor: Double, minNeighbors: Int32, flags: Int32)
  •  This function allows you to retrieve the final stage decision certainty of classification.
     For this, one needs to set `outputRejectLevels` to true and provide the `rejectLevels` and `levelWeights` parameters.
     For each resulting detection, `levelWeights` will then contain the certainty of classification at the final stage.
     This value can then be used to separate strong from weaker classifications.
    
     A code sample on how to use it efficiently can be found below:
    
     Mat img;
     vector<double> weights;
     vector<int> levels;
     vector<Rect> detections;
     CascadeClassifier model("/path/to/your/model.xml");
     model.detectMultiScale(img, detections, levels, weights, 1.1, 3, 0, Size(), Size(), true);
     cerr << "Detection " << detections[0] << " with weight " << weights[0] << endl;
    

    Declaration

    Objective-C

    - (void)detectMultiScale3:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 rejectLevels:(nonnull IntVector *)rejectLevels
                 levelWeights:(nonnull DoubleVector *)levelWeights
                  scaleFactor:(double)scaleFactor
                 minNeighbors:(int)minNeighbors;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, rejectLevels: IntVector, levelWeights: DoubleVector, scaleFactor: Double, minNeighbors: Int32)
  •  This function allows you to retrieve the final stage decision certainty of classification.
     For this, one needs to set `outputRejectLevels` to true and provide the `rejectLevels` and `levelWeights` parameters.
     For each resulting detection, `levelWeights` will then contain the certainty of classification at the final stage.
     This value can then be used to separate strong from weaker classifications.
    
     A code sample on how to use it efficiently can be found below:
    
     Mat img;
     vector<double> weights;
     vector<int> levels;
     vector<Rect> detections;
     CascadeClassifier model("/path/to/your/model.xml");
     model.detectMultiScale(img, detections, levels, weights, 1.1, 3, 0, Size(), Size(), true);
     cerr << "Detection " << detections[0] << " with weight " << weights[0] << endl;
    

    Declaration

    Objective-C

    - (void)detectMultiScale3:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 rejectLevels:(nonnull IntVector *)rejectLevels
                 levelWeights:(nonnull DoubleVector *)levelWeights
                  scaleFactor:(double)scaleFactor;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, rejectLevels: IntVector, levelWeights: DoubleVector, scaleFactor: Double)
  •  This function allows you to retrieve the final stage decision certainty of classification.
     For this, one needs to set `outputRejectLevels` to true and provide the `rejectLevels` and `levelWeights` parameters.
     For each resulting detection, `levelWeights` will then contain the certainty of classification at the final stage.
     This value can then be used to separate strong from weaker classifications.
    
     A code sample on how to use it efficiently can be found below:
    
     Mat img;
     vector<double> weights;
     vector<int> levels;
     vector<Rect> detections;
     CascadeClassifier model("/path/to/your/model.xml");
     model.detectMultiScale(img, detections, levels, weights, 1.1, 3, 0, Size(), Size(), true);
     cerr << "Detection " << detections[0] << " with weight " << weights[0] << endl;
    

    Declaration

    Objective-C

    - (void)detectMultiScale3:(nonnull Mat *)image
                      objects:(nonnull NSMutableArray<Rect2i *> *)objects
                 rejectLevels:(nonnull IntVector *)rejectLevels
                 levelWeights:(nonnull DoubleVector *)levelWeights;

    Swift

    func detectMultiScale(image: Mat, objects: NSMutableArray, rejectLevels: IntVector, levelWeights: DoubleVector)