EM
The class implements the Expectation Maximization algorithm.
See REF: ml_intro_em
Member of Ml
-
Declaration
Objective-C
@property (class, readonly) int DEFAULT_NCLUSTERS
Swift
class var DEFAULT_NCLUSTERS: Int32 { get }
-
Declaration
Objective-C
@property (class, readonly) int DEFAULT_MAX_ITERS
Swift
class var DEFAULT_MAX_ITERS: Int32 { get }
-
Declaration
Objective-C
@property (class, readonly) int START_E_STEP
Swift
class var START_E_STEP: Int32 { get }
-
Declaration
Objective-C
@property (class, readonly) int START_M_STEP
Swift
class var START_M_STEP: Int32 { get }
-
Declaration
Objective-C
@property (class, readonly) int START_AUTO_STEP
Swift
class var START_AUTO_STEP: Int32 { get }
-
Creates an empty EM model. The model should then be trained using the StatModel::train(traindata, flags) method. Alternatively, you can use one of the EM::train* methods or load it from a file using Algorithm::load<EM>(filename).
Declaration
Objective-C
+ (nonnull EM *)create;
Swift
class func create() -> EM
-
Loads and creates a serialized EM from a file.
Use EM::save to serialize and store an EM to disk. Load the EM from this file again by calling this function with the path to the file. Optionally specify the node for the file containing the classifier.
Declaration
Objective-C
+ (nonnull EM *)load:(nonnull NSString *)filepath nodeName:(nonnull NSString *)nodeName;
Swift
class func load(filepath: String, nodeName: String) -> EM
Parameters
filepath
path to serialized EM
nodeName
name of node containing the classifier
-
Loads and creates a serialized EM from a file.
Use EM::save to serialize and store an EM to disk. Load the EM from this file again by calling this function with the path to the file. Optionally specify the node for the file containing the classifier.
Declaration
Objective-C
+ (nonnull EM *)load:(nonnull NSString *)filepath;
Swift
class func load(filepath: String) -> EM
Parameters
filepath
path to serialized EM
-
Declaration
Objective-C
- (nonnull TermCriteria *)getTermCriteria;
Swift
func getTermCriteria() -> TermCriteria
-
Returns a likelihood logarithm value and an index of the most probable mixture component for the given sample.
Declaration
Parameters
sample
A sample for classification. It should be a one-channel matrix of `$$1 \times dims$$` or `$$dims \times 1$$` size.
probs
Optional output matrix that contains posterior probabilities of each component given the sample. It has `$$1 \times nclusters$$` size and CV_64FC1 type.
The method returns a two-element double vector. Zero element is a likelihood logarithm value for the sample. First element is an index of the most probable mixture component for the given sample.
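The two-element return value can be illustrated with a small pure-Python sketch. This is not the OpenCV API: it is a hypothetical 1D two-component mixture with made-up parameters, showing how the mixture density gives the log-likelihood and how the largest weighted component density gives the label.

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of the 1D normal distribution N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predict2_like(x, weights, means, variances):
    """Return (log-likelihood of x, index of the most probable component)."""
    # Per-component terms pi_k * N(x; a_k, S_k); their sum is the mixture density
    terms = [w * gaussian_pdf(x, m, v)
             for w, m, v in zip(weights, means, variances)]
    best = max(range(len(terms)), key=lambda k: terms[k])
    return math.log(sum(terms)), best

# Made-up two-component model: one cluster near 0, one near 5
loglik, label = predict2_like(4.8, [0.4, 0.6], [0.0, 5.0], [1.0, 1.0])
```

Since 4.8 lies close to the second component's mean, the returned index is 1, and the first element is the logarithm of the mixture density at that point.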
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. You need to provide initial means `$$a_k$$` of mixture components. Optionally you can pass initial weights `$$\pi_k$$` and covariance matrices `$$S_k$$` of mixture components.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
means0
Initial means `$$a_k$$` of mixture components. It is a one-channel matrix of `$$nclusters \times dims$$` size. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
covs0
The vector of initial covariance matrices `$$S_k$$` of mixture components. Each of covariance matrices is a one-channel matrix of `$$dims \times dims$$` size. If the matrices do not have CV_64F type they will be converted to the inner matrices of such type for the further computing.
weights0
Initial weights `$$\pi_k$$` of mixture components. It should be a one-channel floating-point matrix with `$$1 \times nclusters$$` or `$$nclusters \times 1$$` size.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
labels
The optional output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). It has `$$nsamples \times 1$$` size and CV_32SC1 type.
probs
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has `$$nsamples \times nclusters$$` size and CV_64FC1 type.
-
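trainE begins with the Expectation step driven by the caller-supplied means. As a rough pure-Python illustration (not the OpenCV API; a hypothetical 1D model where unit variances and uniform weights stand in for the optional covs0 and weights0), the E-step computes the posteriors `$$p_{i,k}$$`:

```python
import math

def e_step(samples, means, variances, weights):
    """E-step: posterior probability p_{i,k} of component k given sample i."""
    probs = []
    for x in samples:
        terms = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                 for w, m, v in zip(weights, means, variances)]
        total = sum(terms)
        probs.append([t / total for t in terms])  # each row sums to 1
    return probs

# means0 is caller-supplied, as trainE requires; the optional covs0/weights0
# are replaced here by unit variances and uniform weights
samples = [-0.1, 0.2, 4.9, 5.3]
probs = e_step(samples, means=[0.0, 5.0], variances=[1.0, 1.0], weights=[0.5, 0.5])
```

Samples near 0 get almost all of their posterior mass on the first component, samples near 5 on the second; the subsequent M-step then re-estimates the model from these responsibilities.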
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. You need to provide initial means `$$a_k$$` of mixture components. Optionally you can pass initial weights `$$\pi_k$$` and covariance matrices `$$S_k$$` of mixture components.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
means0
Initial means `$$a_k$$` of mixture components. It is a one-channel matrix of `$$nclusters \times dims$$` size. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
covs0
The vector of initial covariance matrices `$$S_k$$` of mixture components. Each of covariance matrices is a one-channel matrix of `$$dims \times dims$$` size. If the matrices do not have CV_64F type they will be converted to the inner matrices of such type for the further computing.
weights0
Initial weights `$$\pi_k$$` of mixture components. It should be a one-channel floating-point matrix with `$$1 \times nclusters$$` or `$$nclusters \times 1$$` size.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
labels
The optional output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). It has `$$nsamples \times 1$$` size and CV_32SC1 type.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. You need to provide initial means `$$a_k$$` of mixture components. Optionally you can pass initial weights `$$\pi_k$$` and covariance matrices `$$S_k$$` of mixture components.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
means0
Initial means `$$a_k$$` of mixture components. It is a one-channel matrix of `$$nclusters \times dims$$` size. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
covs0
The vector of initial covariance matrices `$$S_k$$` of mixture components. Each of covariance matrices is a one-channel matrix of `$$dims \times dims$$` size. If the matrices do not have CV_64F type they will be converted to the inner matrices of such type for the further computing.
weights0
Initial weights `$$\pi_k$$` of mixture components. It should be a one-channel floating-point matrix with `$$1 \times nclusters$$` or `$$nclusters \times 1$$` size.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. You need to provide initial means `$$a_k$$` of mixture components. Optionally you can pass initial weights `$$\pi_k$$` and covariance matrices `$$S_k$$` of mixture components.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
means0
Initial means `$$a_k$$` of mixture components. It is a one-channel matrix of `$$nclusters \times dims$$` size. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
covs0
The vector of initial covariance matrices `$$S_k$$` of mixture components. Each of covariance matrices is a one-channel matrix of `$$dims \times dims$$` size. If the matrices do not have CV_64F type they will be converted to the inner matrices of such type for the further computing.
weights0
Initial weights `$$\pi_k$$` of mixture components. It should be a one-channel floating-point matrix with `$$1 \times nclusters$$` or `$$nclusters \times 1$$` size.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. You need to provide initial means `$$a_k$$` of mixture components. Optionally you can pass initial weights `$$\pi_k$$` and covariance matrices `$$S_k$$` of mixture components.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
means0
Initial means `$$a_k$$` of mixture components. It is a one-channel matrix of `$$nclusters \times dims$$` size. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
covs0
The vector of initial covariance matrices `$$S_k$$` of mixture components. Each of covariance matrices is a one-channel matrix of `$$dims \times dims$$` size. If the matrices do not have CV_64F type they will be converted to the inner matrices of such type for the further computing.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. You need to provide initial means `$$a_k$$` of mixture components. Optionally you can pass initial weights `$$\pi_k$$` and covariance matrices `$$S_k$$` of mixture components.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
means0
Initial means `$$a_k$$` of mixture components. It is a one-channel matrix of `$$nclusters \times dims$$` size. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. Initial values of the model parameters will be estimated by the k-means algorithm. Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the *Maximum Likelihood Estimate* of the Gaussian mixture parameters from an input sample set, stores all the parameters inside the structure: `$$p_{i,k}$$` in probs, `$$a_k$$` in means, `$$S_k$$` in covs[k], `$$\pi_k$$` in weights, and optionally computes the output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). The trained model can be used further for prediction, just like any other classifier. The trained model is similar to the NormalBayesClassifier.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
labels
The optional output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). It has `$$nsamples \times 1$$` size and CV_32SC1 type.
probs
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has `$$nsamples \times nclusters$$` size and CV_64FC1 type.
-
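The overall procedure trainEM performs (initialize, then alternate E- and M-steps) can be sketched in pure Python for 1D data. This is only an illustration, not the OpenCV implementation: OpenCV initializes with k-means, while this toy version simply splits the sorted samples into equal chunks, and it runs a fixed number of iterations instead of checking a TermCriteria.

```python
import math

def train_em(samples, nclusters, iters=50):
    """Toy 1D EM: crude init, then alternate E- and M-steps a fixed number of times."""
    samples = sorted(samples)
    n = len(samples)
    # Crude initialization (OpenCV uses k-means here): equal chunks of sorted data
    chunks = [samples[k * n // nclusters:(k + 1) * n // nclusters]
              for k in range(nclusters)]
    means = [sum(c) / len(c) for c in chunks]
    variances = [1.0] * nclusters
    weights = [1.0 / nclusters] * nclusters
    for _ in range(iters):
        # E-step: responsibilities p_{i,k}
        probs = []
        for x in samples:
            terms = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                     for w, m, v in zip(weights, means, variances)]
            total = sum(terms)
            probs.append([t / total for t in terms])
        # M-step: re-estimate pi_k, a_k, S_k from the responsibilities
        for k in range(nclusters):
            nk = sum(p[k] for p in probs)
            weights[k] = nk / n
            means[k] = sum(p[k] * x for p, x in zip(probs, samples)) / nk
            variances[k] = max(sum(p[k] * (x - means[k]) ** 2
                                   for p, x in zip(probs, samples)) / nk, 1e-6)
    labels = [max(range(nclusters), key=lambda k: p[k]) for p in probs]
    return means, variances, weights, labels

data = [0.1, -0.2, 0.3, 0.05, 5.1, 4.8, 5.3, 4.95]
means, variances, weights, labels = train_em(data, 2)
```

On this data the two recovered means settle near 0 and near 5, the weights stay close to 0.5 each, and the labels split the (sorted) samples into the two clusters, mirroring the probs/means/covs/weights/labels outputs described above.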
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. Initial values of the model parameters will be estimated by the k-means algorithm. Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the *Maximum Likelihood Estimate* of the Gaussian mixture parameters from an input sample set, stores all the parameters inside the structure: `$$p_{i,k}$$` in probs, `$$a_k$$` in means, `$$S_k$$` in covs[k], `$$\pi_k$$` in weights, and optionally computes the output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). The trained model can be used further for prediction, just like any other classifier. The trained model is similar to the NormalBayesClassifier.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
labels
The optional output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). It has `$$nsamples \times 1$$` size and CV_32SC1 type.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. Initial values of the model parameters will be estimated by the k-means algorithm. Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the *Maximum Likelihood Estimate* of the Gaussian mixture parameters from an input sample set, stores all the parameters inside the structure: `$$p_{i,k}$$` in probs, `$$a_k$$` in means, `$$S_k$$` in covs[k], `$$\pi_k$$` in weights, and optionally computes the output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). The trained model can be used further for prediction, just like any other classifier. The trained model is similar to the NormalBayesClassifier.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Expectation step. Initial values of the model parameters will be estimated by the k-means algorithm. Unlike many of the ML models, EM is an unsupervised learning algorithm and it does not take responses (class labels or function values) as input. Instead, it computes the *Maximum Likelihood Estimate* of the Gaussian mixture parameters from an input sample set, stores all the parameters inside the structure: `$$p_{i,k}$$` in probs, `$$a_k$$` in means, `$$S_k$$` in covs[k], `$$\pi_k$$` in weights, and optionally computes the output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). The trained model can be used further for prediction, just like any other classifier. The trained model is similar to the NormalBayesClassifier.
Declaration
Objective-C
- (BOOL)trainEM:(nonnull Mat *)samples;
Swift
func trainEM(samples: Mat) -> Bool
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Maximization step. You need to provide initial probabilities `$$p_{i,k}$$` to use this option.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
probs0
The initial probabilities `$$p_{i,k}$$` of each sample i belonging to mixture component k.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
labels
The optional output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). It has `$$nsamples \times 1$$` size and CV_32SC1 type.
probs
The optional output matrix that contains posterior probabilities of each Gaussian mixture component given each sample. It has `$$nsamples \times nclusters$$` size and CV_64FC1 type.
-
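The Maximization step that trainM starts from can be sketched in isolation. This is a hypothetical pure-Python 1D version (not the OpenCV API), using a hard initial assignment as the probabilities `$$p_{i,k}$$`:

```python
def m_step(samples, probs0):
    """M-step: estimate weights pi_k, means a_k and variances S_k from p_{i,k}."""
    n = len(samples)
    nclusters = len(probs0[0])
    weights, means, variances = [], [], []
    for k in range(nclusters):
        nk = sum(p[k] for p in probs0)           # effective sample count of component k
        mk = sum(p[k] * x for p, x in zip(probs0, samples)) / nk
        vk = sum(p[k] * (x - mk) ** 2 for p, x in zip(probs0, samples)) / nk
        weights.append(nk / n)
        means.append(mk)
        variances.append(vk)
    return weights, means, variances

# A hard initial assignment expressed as probabilities p_{i,k}
samples = [0.0, 0.2, 5.0, 5.4]
probs0 = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
weights, means, variances = m_step(samples, probs0)
```

With this assignment the first model parameters come out directly from the supplied probabilities (means 0.1 and 5.2, equal weights), after which the algorithm would continue with the usual E/M alternation.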
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Maximization step. You need to provide initial probabilities `$$p_{i,k}$$` to use this option.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
probs0
The initial probabilities `$$p_{i,k}$$` of each sample i belonging to mixture component k.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
labels
The optional output “class label” for each sample: `$$\texttt{labels}_i=\texttt{arg max}_k(p_{i,k}), i=1..N$$` (indices of the most probable mixture component for each sample). It has `$$nsamples \times 1$$` size and CV_32SC1 type.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Maximization step. You need to provide initial probabilities `$$p_{i,k}$$` to use this option.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
probs0
The initial probabilities `$$p_{i,k}$$` of each sample i belonging to mixture component k.
logLikelihoods
The optional output matrix that contains a likelihood logarithm value for each sample. It has `$$nsamples \times 1$$` size and CV_64FC1 type.
-
Estimate the Gaussian mixture parameters from a samples set.
This variation starts with the Maximization step. You need to provide initial probabilities `$$p_{i,k}$$` to use this option.
Declaration
Parameters
samples
Samples from which the Gaussian mixture model will be estimated. It should be a one-channel matrix, each row of which is a sample. If the matrix does not have CV_64F type it will be converted to the inner matrix of such type for the further computing.
probs0
The initial probabilities `$$p_{i,k}$$` of each sample i belonging to mixture component k.
-
Returns posterior probabilities for the provided samples
Declaration
Parameters
samples
The input samples, floating-point matrix.
results
The optional output `$$nSamples \times nClusters$$` matrix of results. It contains posterior probabilities for each sample from the input.
flags
This parameter will be ignored.
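The shape and meaning of the results matrix can be illustrated with a small pure-Python sketch (not the OpenCV API; hypothetical 1D mixture parameters): one row per sample, one column per component, each row summing to 1, with the row-wise argmax giving the most probable component.

```python
import math

def predict_matrix(samples, weights, means, variances):
    """Posterior matrix: one row per sample, one column per mixture component."""
    results = []
    for x in samples:
        terms = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                 for w, m, v in zip(weights, means, variances)]
        total = sum(terms)
        results.append([t / total for t in terms])
    return results

results = predict_matrix([0.1, 5.2, 2.4], [0.5, 0.5], [0.0, 5.0], [1.0, 1.0])
# Rows sum to 1; the row-wise argmax gives the most probable component
labels = [row.index(max(row)) for row in results]
```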
-
Returns posterior probabilities for the provided samples
Declaration
Parameters
samples
The input samples, floating-point matrix.
results
The optional output `$$nSamples \times nClusters$$` matrix of results. It contains posterior probabilities for each sample from the input.
-
Declaration
Objective-C
- (int)getClustersNumber;
Swift
func getClustersNumber() -> Int32
-
Declaration
Objective-C
- (int)getCovarianceMatrixType;
Swift
func getCovarianceMatrixType() -> Int32
-
Returns covariance matrices
Returns the vector of covariance matrices. The number of matrices equals the number of Gaussian mixture components; each matrix is a square floating-point N x N matrix, where N is the space dimensionality.
Declaration
Objective-C
- (void)getCovs:(nonnull NSMutableArray<Mat *> *)covs;
Swift
func getCovs(covs: NSMutableArray)
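For intuition about what each returned N x N matrix contains, here is a hypothetical pure-Python computation (not the OpenCV API) of one component's responsibility-weighted covariance matrix, the quantity `$$S_k$$` that the M-step produces and getCovs exposes:

```python
def weighted_cov(samples, resp_k, mean_k):
    """d x d covariance matrix of one mixture component, weighted by responsibilities."""
    d = len(mean_k)
    nk = sum(resp_k)
    cov = [[0.0] * d for _ in range(d)]
    for w, x in zip(resp_k, samples):
        diff = [xi - mi for xi, mi in zip(x, mean_k)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += w * diff[i] * diff[j]
    return [[c / nk for c in row] for row in cov]

# Four 2D points symmetric about (1, 1): the covariance comes out diagonal
samples = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
cov = weighted_cov(samples, [1.0, 1.0, 1.0, 1.0], (1.0, 1.0))
```

With uniform responsibilities this reduces to the ordinary sample covariance; for this symmetric point set it is the 2 x 2 identity matrix.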
-
See also: -getClustersNumber
Declaration
Objective-C
- (void)setClustersNumber:(int)val;
Swift
func setClustersNumber(val: Int32)
-
See also: -getCovarianceMatrixType
Declaration
Objective-C
- (void)setCovarianceMatrixType:(int)val;
Swift
func setCovarianceMatrixType(val: Int32)
-
See also: -getTermCriteria
Declaration
Objective-C
- (void)setTermCriteria:(nonnull TermCriteria *)val;
Swift
func setTermCriteria(val: TermCriteria)