The density $p(x|\Theta)$ induced on the training data is obtained by marginalizing the component selector $k$, obtaining
\[
p(x|\Theta) = \sum_{k=1}^{K} \pi_k p(x|\mu_k,\Sigma_k),
\qquad
p(x|\mu_k,\Sigma_k) = \frac{1}{\sqrt{(2\pi)^d \det\Sigma_k}}
\exp\left[ -\frac{1}{2}(x-\mu_k)^\top \Sigma_k^{-1} (x-\mu_k) \right].
\]
Learning a GMM to fit a dataset $X=(x_1, x_2, \dots, x_n)$ is usually done by maximizing the log-likelihood of the data:
\[
\ell(\Theta;X)
= E_{x \sim \hat p}\left[ \log p(x|\Theta) \right]
= \frac{1}{n} \sum_{i=1}^{n} \log \sum_{k=1}^{K} \pi_k p(x_i|\mu_k,\Sigma_k)
\]
where $\hat p$ is the empirical distribution of the data. An algorithm to solve this problem is introduced next.
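For concreteness, the following sketch (illustrative code, not part of VLFeat; the flat row-major layout of the priors, means, and diagonal covariances is an assumption) evaluates the density $p(x|\Theta)$ in the diagonal-covariance case used throughout this implementation:

```c
/* Sketch (illustrative, not part of VLFeat): evaluate the GMM density
 * p(x|Theta) for diagonal covariances. priors has K entries; means and
 * covars are K-by-d row-major arrays (covars holds the diagonal of each
 * Sigma_k). */
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

double gmm_density (double const *x,
                    double const *priors, double const *means,
                    double const *covars, size_t K, size_t d)
{
  double p = 0.0 ;
  for (size_t k = 0 ; k < K ; ++k) {
    double quad = 0.0, logDet = 0.0 ;
    for (size_t j = 0 ; j < d ; ++j) {
      double diff = x[j] - means[k*d + j] ;
      quad   += diff * diff / covars[k*d + j] ;
      logDet += log(covars[k*d + j]) ;
    }
    /* log N(x; mu_k, Sigma_k) = -(d log(2 pi) + log det Sigma_k + quad) / 2 */
    double logGaussian = -0.5 * (d * log(2.0 * M_PI) + logDet + quad) ;
    p += priors[k] * exp(logGaussian) ;
  }
  return p ;
}
```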
Learning a GMM by expectation maximization
The direct maximization of the log-likelihood function of a GMM is difficult because the assignments of points to Gaussian modes are not observable and, as such, must be treated as latent variables.
Usually, GMMs are learned by using the *Expectation Maximization* (EM) algorithm [dempster77maximum]. Consider in general the problem of estimating to the maximum likelihood a distribution $p(x|\Theta) = \int p(x,h|\Theta)\,dh$, where $x$ is a measurement, $h$ is a *latent variable*, and $\Theta$ are the model parameters. By introducing an auxiliary distribution $q(h|x)$ on the latent variable, one can use Jensen's inequality to obtain the following lower bound on the log-likelihood:
\begin{align*}
\ell(\Theta;X) = E_{x \sim \hat p} \log p(x|\Theta)
&= E_{x \sim \hat p} \log \int p(x,h|\Theta)\,dh \\
&= E_{x \sim \hat p} \log \int \frac{p(x,h|\Theta)}{q(h|x)}\, q(h|x)\,dh \\
&\geq E_{x \sim \hat p} \int q(h|x) \log \frac{p(x,h|\Theta)}{q(h|x)}\,dh \\
&= E_{(x,h) \sim q(h|x)\,\hat p(x)} \log p(x,h|\Theta)
 - E_{(x,h) \sim q(h|x)\,\hat p(x)} \log q(h|x).
\end{align*}
The first term of the last expression is the log-likelihood of a model in which both $x$ and $h$ are observed and jointly distributed as $q(h|x)\,\hat p(x)$; the second term is the average entropy of the latent variable, which does not depend on $\Theta$. This lower bound is maximized and becomes tight by setting $q(h|x) = p(h|x,\Theta)$ to be the posterior distribution on the latent variable $h$ (given the current estimate of the parameters $\Theta$). In fact:
\[
E_{x \sim \hat p} \log p(x|\Theta)
= E_{(x,h) \sim p(h|x,\Theta)\,\hat p(x)}
  \left[ \log \frac{p(x,h|\Theta)}{p(h|x,\Theta)} \right]
= E_{(x,h) \sim p(h|x,\Theta)\,\hat p(x)}
  \left[ \log p(x|\Theta) \right]
= \ell(\Theta;X).
\]
EM alternates between updating the latent variable auxiliary distribution $q(h|x) = p(h|x,\Theta_t)$ (*expectation step*) given the current estimate of the parameters $\Theta_t$, and then updating the model parameters $\Theta_{t+1}$ by maximizing the log-likelihood lower bound derived above (*maximization step*). The simplification is that in the maximization step both $x$ and $h$ are now ``observed'' quantities. This procedure converges to a local optimum of the model log-likelihood.
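Schematically, the alternation can be organized as in the following sketch. The helper functions expectation_step and maximization_step are hypothetical (illustrative versions are sketched in the next two sections), and the stopping rule based on the improvement of the bound is just one common choice:

```c
/* Sketch of the EM alternation (illustrative, not the VLFeat implementation).
 * The responsibilities q form an n-by-K row-major array; the helpers are
 * hypothetical and are sketched in the next two sections. */
#include <math.h>
#include <stddef.h>

double expectation_step  (double const *X, size_t n, size_t d, size_t K,
                          double const *priors, double const *means,
                          double const *covars, double *q) ;
void   maximization_step (double const *X, size_t n, size_t d, size_t K,
                          double const *q, double *priors, double *means,
                          double *covars) ;

void em_fit (double const *X, size_t n, size_t d, size_t K,
             double *priors, double *means, double *covars,
             double *q, size_t maxNumIterations, double tolerance)
{
  double previousLL = -INFINITY ;
  for (size_t t = 0 ; t < maxNumIterations ; ++t) {
    /* E step: q_{ik} = p(k | x_i, Theta_t), given the current parameters */
    double ll = expectation_step (X, n, d, K, priors, means, covars, q) ;
    /* M step: re-estimate Theta_{t+1} treating (x_i, q_{i,:}) as observed */
    maximization_step (X, n, d, K, q, priors, means, covars) ;
    /* stop when the log-likelihood lower bound stops improving */
    if (ll - previousLL < tolerance) break ;
    previousLL = ll ;
  }
}
```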
Expectation step
In the case of a GMM, the latent variables are the point-to-cluster assignments $k_i$, $i=1,\dots,n$, one for each of the $n$ data points. The auxiliary distribution $q(k_i|x_i) = q_{ik}$ is a matrix with $n \times K$ entries. Each row $q_{i,:}$ can be thought of as a vector of soft assignments of the data point $x_i$ to each of the Gaussian modes. Setting $q_{ik} = p(k_i = k \,|\, x_i, \Theta)$ yields
\[
q_{ik} = \frac{\pi_k\, p(x_i|\mu_k,\Sigma_k)}{\sum_{l=1}^{K} \pi_l\, p(x_i|\mu_l,\Sigma_l)}
\]
where the Gaussian density $p(x_i|\mu_k,\Sigma_k)$ was given above.
One important point to keep in mind when these probabilities are computed is that the Gaussian densities may attain very low values and underflow in a vanilla implementation, so it is safer to evaluate them in the logarithmic domain. Furthermore, the VLFeat GMM implementation restricts the covariance matrices to be diagonal. In this case, the determinant of $\Sigma_k$ reduces to the product of the diagonal elements (and its logarithm to the sum of their logarithms), and the inverse of $\Sigma_k$ is obtained by inverting the elements on the diagonal of the covariance matrix.
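The following sketch (illustrative code, not the VLFeat implementation) computes the soft assignments in the logarithmic domain using the log-sum-exp trick, assuming diagonal covariances and the same flat array layout as above; it also returns the average data log-likelihood as a by-product:

```c
/* Sketch of the E step (illustrative, not the VLFeat implementation):
 * compute the soft assignments q_{ik} in the log domain to avoid underflow,
 * assuming diagonal covariances. Returns the average data log-likelihood. */
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

double expectation_step (double const *X, size_t n, size_t d, size_t K,
                         double const *priors, double const *means,
                         double const *covars, double *q)
{
  double ll = 0.0 ;
  for (size_t i = 0 ; i < n ; ++i) {
    double maxLog = -INFINITY ;
    for (size_t k = 0 ; k < K ; ++k) {
      double quad = 0.0, logDet = 0.0 ;
      for (size_t j = 0 ; j < d ; ++j) {
        double diff = X[i*d + j] - means[k*d + j] ;
        quad   += diff * diff / covars[k*d + j] ;
        logDet += log(covars[k*d + j]) ;
      }
      /* log (pi_k N(x_i; mu_k, Sigma_k)) with diagonal Sigma_k */
      q[i*K + k] = log(priors[k])
                 - 0.5 * (d * log(2.0 * M_PI) + logDet + quad) ;
      if (q[i*K + k] > maxLog) maxLog = q[i*K + k] ;
    }
    /* normalize the row with the log-sum-exp trick */
    double sum = 0.0 ;
    for (size_t k = 0 ; k < K ; ++k) sum += exp(q[i*K + k] - maxLog) ;
    double logNorm = maxLog + log(sum) ;
    for (size_t k = 0 ; k < K ; ++k) q[i*K + k] = exp(q[i*K + k] - logNorm) ;
    ll += logNorm ;
  }
  return ll / n ;
}
```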
Maximization step
The M step estimates the parameters of the Gaussian mixture components and the prior probabilities $\pi_k$ given the auxiliary distribution on the point-to-cluster assignments computed in the E step. Since all the variables are now ``observed'', the estimate is quite simple. For example, the mean $\mu_k$ of a Gaussian mode is obtained as the mean of the data points assigned to it (accounting for the strength of the soft assignments). The other quantities are obtained in a similar manner, yielding:
\begin{align*}
\mu_k &= \frac{\sum_{i=1}^{n} q_{ik}\, x_i}{\sum_{i=1}^{n} q_{ik}}, &
\Sigma_k &= \frac{\sum_{i=1}^{n} q_{ik}\, (x_i-\mu_k)(x_i-\mu_k)^\top}{\sum_{i=1}^{n} q_{ik}}, &
\pi_k &= \frac{\sum_{i=1}^{n} q_{ik}}{\sum_{i=1}^{n}\sum_{l=1}^{K} q_{il}}.
\end{align*}
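A corresponding sketch of the M step (illustrative code, not the VLFeat implementation; same array layout as above, with the covariances restricted to their diagonal) is:

```c
/* Sketch of the M step (illustrative, not the VLFeat implementation):
 * re-estimate priors, means and diagonal covariances from the soft
 * assignments q computed in the E step (each row of q sums to one). */
#include <stddef.h>

void maximization_step (double const *X, size_t n, size_t d, size_t K,
                        double const *q, double *priors, double *means,
                        double *covars)
{
  for (size_t k = 0 ; k < K ; ++k) {
    /* total soft mass assigned to mode k */
    double mass = 0.0 ;
    for (size_t i = 0 ; i < n ; ++i) mass += q[i*K + k] ;

    /* mu_k: weighted mean of the data points */
    for (size_t j = 0 ; j < d ; ++j) {
      double m = 0.0 ;
      for (size_t i = 0 ; i < n ; ++i) m += q[i*K + k] * X[i*d + j] ;
      means[k*d + j] = m / mass ;
    }
    /* diag(Sigma_k): weighted variances around the updated mean */
    for (size_t j = 0 ; j < d ; ++j) {
      double v = 0.0 ;
      for (size_t i = 0 ; i < n ; ++i) {
        double diff = X[i*d + j] - means[k*d + j] ;
        v += q[i*K + k] * diff * diff ;
      }
      covars[k*d + j] = v / mass ;
    }
    /* pi_k: fraction of the total soft mass (sum over i,k of q_{ik} is n) */
    priors[k] = mass / n ;
  }
}
```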
Initialization algorithms
The EM algorithm is a local optimization method. As such, the quality of the solution strongly depends on the quality of the initial values of the parameters (i.e. of the locations and shapes of the Gaussian modes).
gmm.h supports the following cluster initialization algorithms:
- Random data points. (vl_gmm_init_with_rand_data) This method sets the means of the modes by sampling a corresponding number of data points at random, sets the covariance matrices of all the modes to the covariance of the entire dataset, and sets the prior probabilities of the Gaussian modes to be uniform. This initialization method is the fastest and simplest, but also the one most likely to end up in a bad local minimum.
- KMeans initialization (vl_gmm_init_with_kmeans) This method uses KMeans to pre-cluster the points. It then sets the means and covariances of the Gaussian distributions to the sample means and covariances of each KMeans cluster, and sets the prior probabilities to be proportional to the mass of each cluster. In order to use this initialization method, a user can specify an instance of VlKMeans by using the function vl_gmm_set_kmeans_init_object, or let VlGMM create one automatically.
Alternatively, one can manually specify a starting point (vl_gmm_set_priors, vl_gmm_set_means, vl_gmm_set_covariances).
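Putting things together, a typical use of the VlGMM object might look as follows. This is only a sketch that assumes the standard VLFeat C API (vl_gmm_new, vl_gmm_set_initialization, vl_gmm_cluster and the parameter accessors); consult the API reference for the exact signatures.

```c
/* Sketch of typical usage of VlGMM (assumes the standard VLFeat C API). */
#include <vl/gmm.h>

void fit_gmm_example (float const *data, vl_size numData, vl_size dimension)
{
  vl_size numClusters = 16 ;
  VlGMM * gmm = vl_gmm_new (VL_TYPE_FLOAT, dimension, numClusters) ;

  /* initialize the modes with KMeans (see above) and cap the EM iterations */
  vl_gmm_set_initialization (gmm, VlGMMKMeans) ;
  vl_gmm_set_max_num_iterations (gmm, 100) ;

  /* run EM on the data */
  vl_gmm_cluster (gmm, data, numData) ;

  /* retrieve the learned parameters */
  float const * means  = (float const *) vl_gmm_get_means (gmm) ;
  float const * covars = (float const *) vl_gmm_get_covariances (gmm) ;
  float const * priors = (float const *) vl_gmm_get_priors (gmm) ;

  /* ... use priors, means, covars ... */

  vl_gmm_delete (gmm) ;
}
```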