Source: Wikipedia, the free encyclopedia (retrieved 2016/01/10 11:02:57 JST)
In statistics, an expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.
The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin.[1] They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. In particular, a very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers[2][3][4] following his collaboration with Per Martin-Löf and Anders Martin-Löf.[5][6][7][8][9][10][11] The Dempster-Laird-Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. Regardless of earlier inventions, the innovative Dempster-Laird-Rubin paper in the Journal of the Royal Statistical Society received an enthusiastic discussion at the Royal Statistical Society meeting with Sundberg calling the paper "brilliant". The Dempster-Laird-Rubin paper established the EM method as an important tool of statistical analysis.
The convergence analysis of the Dempster-Laird-Rubin paper was flawed and a correct convergence analysis was published by C.F. Jeff Wu in 1983.[12] Wu's proof established the EM method's convergence outside of the exponential family, as claimed by Dempster-Laird-Rubin.[12]
The EM algorithm is used to find (locally) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either there are missing values among the data, or the model can be formulated more simply by assuming the existence of additional unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component that each data point belongs to.
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values — viz. the parameters and the latent variables — and simultaneously solving the resulting equations. In statistical models with latent variables, this usually is not possible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.
The EM algorithm proceeds from the observation that the following is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work at all, but in fact it can be proven that in this particular context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a saddle point.[12] In general there may be multiple maxima, and there is no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e. nonsensical maxima. For example, one of the "solutions" that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
Given a statistical model which generates a set $\mathbf{X}$ of observed data, a set of unobserved latent data or missing values $\mathbf{Z}$, and a vector of unknown parameters $\boldsymbol\theta$, along with a likelihood function $L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol\theta)$, the maximum likelihood estimate (MLE) of the unknown parameters is determined by the marginal likelihood of the observed data
$$L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X} \mid \boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol\theta).$$
However, this quantity is often intractable (e.g. if $\mathbf{Z}$ is a sequence of events, so that the number of values grows exponentially with the sequence length, making the exact calculation of the sum extremely difficult).
The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:
Expectation step (E step): Calculate the expected value of the log likelihood function, with respect to the conditional distribution of $\mathbf{Z}$ given $\mathbf{X}$ under the current estimate of the parameters $\boldsymbol\theta^{(t)}$:
$$Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)}}\big[\log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z})\big].$$
Maximization step (M step): Find the parameters that maximize this quantity:
$$\boldsymbol\theta^{(t+1)} = \operatorname*{arg\,max}_{\boldsymbol\theta}\, Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}).$$
Note that in typical models to which EM is applied:
1. The observed data points $\mathbf{X}$ may be discrete or continuous; there may in fact be a vector of observations associated with each data point.
2. The missing values (i.e. latent variables) $\mathbf{Z}$ are discrete, drawn from a fixed number of values, with one latent variable per observed data point.
3. The parameters are continuous, and are of two kinds: parameters associated with all data points, and parameters associated with a particular value of a latent variable.
However, it is possible to apply EM to other sorts of models.
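As an illustration of this alternation, here is a minimal Python sketch of the generic E/M loop. The callables `e_step` and `m_step` are hypothetical placeholders that a concrete model (such as the Gaussian mixture example later in the article) would supply, and the stopping rule on the observed-data log-likelihood is just one common choice.

```python
import numpy as np

def em(x, theta0, e_step, m_step, max_iter=100, tol=1e-8):
    """Generic EM loop (illustrative sketch, not a library API).

    e_step(x, theta) -> (stats, log_lik): expected sufficient statistics /
        responsibilities under p(Z | X, theta), plus the observed-data
        log-likelihood at theta.
    m_step(x, stats) -> theta maximizing the expected complete-data
        log-likelihood Q(theta | theta_t) implied by `stats`.
    """
    theta, prev_ll = theta0, -np.inf
    for _ in range(max_iter):
        stats, ll = e_step(x, theta)      # E step
        theta = m_step(x, stats)          # M step
        if ll - prev_ll < tol:            # log-likelihood has stopped improving
            break
        prev_ll = ll
    return theta
```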
The motivation is as follows. If we know the value of the parameters $\boldsymbol\theta$, we can usually find the value of the latent variables $\mathbf{Z}$ by maximizing the log-likelihood over all possible values of $\mathbf{Z}$, either simply by iterating over $\mathbf{Z}$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $\mathbf{Z}$, we can find an estimate of the parameters $\boldsymbol\theta$ fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both $\boldsymbol\theta$ and $\mathbf{Z}$ are unknown:
1. First, initialize the parameters $\boldsymbol\theta$ to some random values.
2. Compute the best value for $\mathbf{Z}$ given these parameter values.
3. Then, use the just-computed values of $\mathbf{Z}$ to compute a better estimate for the parameters $\boldsymbol\theta$.
4. Iterate steps 2 and 3 until convergence.
The algorithm as just described monotonically approaches a local minimum of the cost function, and is commonly called hard EM. The k-means algorithm is an example of this class of algorithms.
However, one can do somewhat better: Rather than making a hard choice for $\mathbf{Z}$ given the current parameter values and averaging only over the set of data points associated with a particular value of $\mathbf{Z}$, one can instead determine the probability of each possible value of $\mathbf{Z}$ for each data point, and then use the probabilities associated with a particular value of $\mathbf{Z}$ to compute a weighted average over the entire set of data points. The resulting algorithm is commonly called soft EM, and is the type of algorithm normally associated with EM. The counts used to compute these weighted averages are called soft counts (as opposed to the hard counts used in a hard-EM-type algorithm such as k-means). The probabilities computed for $\mathbf{Z}$ are posterior probabilities and are what is computed in the E step. The soft counts used to compute new parameter values are what is computed in the M step.
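To make the contrast concrete, the following sketch (illustrative only, assuming a toy one-dimensional mixture of two unit-variance Gaussians with the current means and weights given) computes soft counts as posterior responsibilities and hard counts as k-means-style assignments:

```python
import numpy as np
from scipy.stats import norm

x = np.array([0.1, 0.3, 2.8, 3.2])          # toy 1-D observations
mu = np.array([0.0, 3.0])                    # current component means
tau = np.array([0.5, 0.5])                   # current mixing weights

# Soft E step: posterior responsibilities p(Z_i = j | x_i) for unit-variance components
dens = tau * norm.pdf(x[:, None], loc=mu, scale=1.0)    # shape (n, 2)
resp = dens / dens.sum(axis=1, keepdims=True)           # soft counts

# Hard E step (k-means style): commit each point to its most probable component
hard = resp.argmax(axis=1)

# M step for the means: soft counts give a weighted average over all points,
# hard counts give a plain average within each cluster
mu_soft = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
mu_hard = np.array([x[hard == j].mean() for j in range(2)])
```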
Speaking of an expectation (E) step is a bit of a misnomer. What is calculated in the first step are the fixed, data-dependent parameters of the function Q. Once the parameters of Q are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.
Although an EM iteration does increase the observed data (i.e. marginal) likelihood function, there is no guarantee that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. There are a variety of heuristic or metaheuristic approaches for escaping a local maximum, such as random restart (starting with several different random initial estimates θ(t)), or applying simulated annealing methods.
EM is particularly useful when the likelihood is an exponential family: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed form updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of Per Martin-Löf and Anders Martin-Löf).[3][4][7][8][9][10][11]
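As a generic sketch of why closed-form updates arise (the notation here is illustrative rather than taken from the references above), suppose the complete-data model is in canonical exponential-family form, $p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol\theta) = h(\mathbf{X}, \mathbf{Z}) \exp\{\boldsymbol\theta^\top T(\mathbf{X}, \mathbf{Z}) - A(\boldsymbol\theta)\}$. Then
$$Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) = \boldsymbol\theta^\top\, \operatorname{E}_{\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)}}\big[T(\mathbf{X}, \mathbf{Z})\big] - A(\boldsymbol\theta) + \text{const},$$
so the E step only has to compute the expected sufficient statistics $\bar{T}^{(t)} = \operatorname{E}_{\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)}}[T(\mathbf{X}, \mathbf{Z})]$, and the M step solves the moment-matching condition $\nabla A(\boldsymbol\theta) = \bar{T}^{(t)}$, which frequently has a closed-form solution.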
The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.
There are other methods for finding maximum likelihood estimates, such as gradient descent, conjugate gradient or variations of the Gauss–Newton method. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
Expectation–maximization works to improve $Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ rather than directly improving $\log p(\mathbf{X} \mid \boldsymbol\theta)$. Here we show that improvements to the former imply improvements to the latter.[13]
For any $\mathbf{Z}$ with non-zero probability $p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta)$, we can write
$$\log p(\mathbf{X} \mid \boldsymbol\theta) = \log p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol\theta) - \log p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta).$$
We take the expectation over possible values of the unknown data $\mathbf{Z}$ under the current parameter estimate $\boldsymbol\theta^{(t)}$ by multiplying both sides by $p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)})$ and summing (or integrating) over $\mathbf{Z}$. The left-hand side is the expectation of a constant, so we get:
$$\log p(\mathbf{X} \mid \boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)}) \log p(\mathbf{X}, \mathbf{Z} \mid \boldsymbol\theta) - \sum_{\mathbf{Z}} p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta^{(t)}) \log p(\mathbf{Z} \mid \mathbf{X}, \boldsymbol\theta) = Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}),$$
where $H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ is defined by the negated sum it is replacing. This last equation holds for every value of $\boldsymbol\theta$ including $\boldsymbol\theta = \boldsymbol\theta^{(t)}$,
$$\log p(\mathbf{X} \mid \boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}),$$
and subtracting this last equation from the previous equation gives
$$\log p(\mathbf{X} \mid \boldsymbol\theta) - \log p(\mathbf{X} \mid \boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}) + H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}).$$
However, Gibbs' inequality tells us that $H(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)})$, so we can conclude that
$$\log p(\mathbf{X} \mid \boldsymbol\theta) - \log p(\mathbf{X} \mid \boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)}).$$
In words, choosing $\boldsymbol\theta$ to improve $Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})$ beyond $Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)})$ cannot cause $\log p(\mathbf{X} \mid \boldsymbol\theta)$ to decrease below $\log p(\mathbf{X} \mid \boldsymbol\theta^{(t)})$, and so the marginal likelihood of the data is non-decreasing.
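In particular, because the M step chooses $\boldsymbol\theta^{(t+1)}$ to maximize $Q(\cdot \mid \boldsymbol\theta^{(t)})$, we have $Q(\boldsymbol\theta^{(t+1)} \mid \boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)})$, and the inequality above yields
$$\log p(\mathbf{X} \mid \boldsymbol\theta^{(t+1)}) \;\ge\; \log p(\mathbf{X} \mid \boldsymbol\theta^{(t)}) + \Big[Q(\boldsymbol\theta^{(t+1)} \mid \boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)} \mid \boldsymbol\theta^{(t)})\Big] \;\ge\; \log p(\mathbf{X} \mid \boldsymbol\theta^{(t)}).$$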
Under some circumstances, it is convenient to view the EM algorithm as two alternating maximization steps.[14][15] Consider the function
$$F(q, \theta) := \operatorname{E}_q[\log L(\theta; x, Z)] + H(q) = -D_{\mathrm{KL}}\big(q \,\big\|\, p_{Z \mid X}(\cdot \mid x; \theta)\big) + \log L(\theta; x),$$
where $q$ is an arbitrary probability distribution over the unobserved data $z$, $p_{Z \mid X}(\cdot \mid x; \theta)$ is the conditional distribution of the unobserved data given the observed data $x$, $H$ is the entropy and $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence.
Then the steps in the EM algorithm may be viewed as:
Expectation step: Choose $q$ to maximize $F$: $q^{(t)} = \operatorname*{arg\,max}_{q} F(q, \theta^{(t)})$.
Maximization step: Choose $\theta$ to maximize $F$: $\theta^{(t+1)} = \operatorname*{arg\,max}_{\theta} F(q^{(t)}, \theta)$.
EM is frequently used for data clustering in machine learning and computer vision. In natural language processing, two prominent instances of the algorithm are the Baum-Welch algorithm and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars.
In psychometrics, EM is almost indispensable for estimating item parameters and latent abilities of item response theory models.
With the ability to handle missing data and unobserved variables, EM is becoming a useful tool for pricing and managing the risk of a portfolio.[citation needed]
The EM algorithm (and its faster variant Ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography and single photon emission computed tomography. See below for other faster variants of EM.
A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
Filtering and smoothing EM algorithms arise by repeating the following two-step procedure:
E-step: Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
M-step: Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
Suppose that a Kalman filter or minimum-variance smoother operates on noisy measurements of a single-input-single-output system. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation
$$\widehat{\sigma}^{2}_{v} = \frac{1}{N} \sum_{k=1}^{N} \big(z_k - \widehat{x}_{k}\big)^{2},$$
where $\widehat{x}_k$ are scalar output estimates calculated by a filter or a smoother from $N$ scalar measurements $z_k$. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
$$\widehat{\sigma}^{2}_{w} = \frac{1}{N} \sum_{k=1}^{N} \big(\widehat{x}_{k+1} - \widehat{F}\,\widehat{x}_{k}\big)^{2},$$
where $\widehat{x}_k$ and $\widehat{x}_{k+1}$ are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
$$\widehat{F} = \frac{\sum_{k=1}^{N} \big(\widehat{x}_{k+1} - \widehat{F}\,\widehat{x}_{k}\big)\,\widehat{x}_{k}}{\sum_{k=1}^{N} \widehat{x}_{k}^{2}}.$$
The convergence of parameter estimates such as those above are well studied.[16][17][18]
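A minimal Python sketch of these scalar updates follows, assuming smoothed state estimates `x_hat` (length N+1), measurements `z` (length N), and a current coefficient estimate `F_hat` are already available; all names are illustrative, and the coefficient re-estimate is written here as a plain least-squares fit of successive state estimates.

```python
import numpy as np

def update_parameters(z, x_hat, F_hat):
    """Illustrative M-step-style updates for a scalar state-space model.

    z      : N noisy measurements
    x_hat  : N + 1 filtered or smoothed state estimates
    F_hat  : current estimate of the AR(1) state-transition coefficient
    """
    N = len(z)
    sigma_v2 = np.mean((z - x_hat[:N]) ** 2)                       # measurement-noise variance
    sigma_w2 = np.mean((x_hat[1:N + 1] - F_hat * x_hat[:N]) ** 2)  # process-noise variance
    # Coefficient re-estimate via least squares of x_hat[k+1] on x_hat[k]
    F_new = np.dot(x_hat[1:N + 1], x_hat[:N]) / np.dot(x_hat[:N], x_hat[:N])
    return sigma_v2, sigma_w2, F_new
```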
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton–Raphson techniques.[19] Additionally EM can be used with constrained estimation techniques.
Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed.[20]
This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which one only seeks an increase in the objective function F for both the E step and M step under the alternative description.[14] GEM has also been developed in a distributed environment and shows promising results.[21]
It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[22] and therefore use any machinery developed in the more general case.
The Q-function used in the EM algorithm is based on the log likelihood; in this form EM is therefore regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be expressed exactly as an equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step; its maximization is a generalized M step. This pair is called the α-EM algorithm,[23] which contains the log-EM algorithm as a subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradients or Hessian matrices is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm, the α-HMM.[24]
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). We may want a fully Bayesian version of this, giving a probability distribution over θ as well as the latent variables. In fact the Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If we use the factorized Q approximation as described above (variational Bayes), we may iterate over each latent variable (now including θ) and optimize them one at a time. There are now k steps per iteration, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.
Let $\mathbf{x} = (\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$ be a sample of $n$ independent observations from a mixture of two multivariate normal distributions of dimension $d$, and let $\mathbf{z} = (z_1, z_2, \ldots, z_n)$ be the latent variables that determine the component from which the observation originates:[15]
$$X_i \mid (Z_i = 1) \sim \mathcal{N}_d(\boldsymbol\mu_1, \Sigma_1) \quad \text{and} \quad X_i \mid (Z_i = 2) \sim \mathcal{N}_d(\boldsymbol\mu_2, \Sigma_2),$$
where
$$\operatorname{P}(Z_i = 1) = \tau_1 \quad \text{and} \quad \operatorname{P}(Z_i = 2) = \tau_2 = 1 - \tau_1.$$
The aim is to estimate the unknown parameters representing the "mixing" value between the Gaussians and the means and covariances of each:
$$\theta = \big(\boldsymbol\tau, \boldsymbol\mu_1, \boldsymbol\mu_2, \Sigma_1, \Sigma_2\big),$$
where the incomplete-data likelihood function is
$$L(\theta; \mathbf{x}) = \prod_{i=1}^{n} \sum_{j=1}^{2} \tau_j\, f(\mathbf{x}_i; \boldsymbol\mu_j, \Sigma_j),$$
and the complete-data likelihood function is
$$L(\theta; \mathbf{x}, \mathbf{z}) = p(\mathbf{x}, \mathbf{z} \mid \theta) = \prod_{i=1}^{n} \prod_{j=1}^{2} \big[ f(\mathbf{x}_i; \boldsymbol\mu_j, \Sigma_j)\, \tau_j \big]^{\mathbb{I}(z_i = j)},$$
or
$$L(\theta; \mathbf{x}, \mathbf{z}) = \exp\left\{ \sum_{i=1}^{n} \sum_{j=1}^{2} \mathbb{I}(z_i = j) \Big[ \log \tau_j - \tfrac{1}{2} \log |\Sigma_j| - \tfrac{1}{2} (\mathbf{x}_i - \boldsymbol\mu_j)^\top \Sigma_j^{-1} (\mathbf{x}_i - \boldsymbol\mu_j) - \tfrac{d}{2} \log(2\pi) \Big] \right\},$$
where $\mathbb{I}$ is an indicator function and $f$ is the probability density function of a multivariate normal.
To see the last equality, note that for each i all indicators are equal to zero, except for one which is equal to one. The inner sum thus reduces to a single term.
Given our current estimate of the parameters $\theta^{(t)}$, the conditional distribution of the $Z_i$ is determined by Bayes' theorem to be the proportional height of the normal density weighted by $\tau$:
$$T_{j,i}^{(t)} := \operatorname{P}(Z_i = j \mid X_i = \mathbf{x}_i; \theta^{(t)}) = \frac{\tau_j^{(t)}\, f(\mathbf{x}_i; \boldsymbol\mu_j^{(t)}, \Sigma_j^{(t)})}{\tau_1^{(t)}\, f(\mathbf{x}_i; \boldsymbol\mu_1^{(t)}, \Sigma_1^{(t)}) + \tau_2^{(t)}\, f(\mathbf{x}_i; \boldsymbol\mu_2^{(t)}, \Sigma_2^{(t)})}.$$
These are called the "membership probabilities" which are normally considered the output of the E step (although this is not the Q function of below).
Note that this E step corresponds with the following function for Q:
$$Q(\theta \mid \theta^{(t)}) = \operatorname{E}\big[\log L(\theta; \mathbf{x}, \mathbf{Z})\big] = \sum_{i=1}^{n} \sum_{j=1}^{2} T_{j,i}^{(t)} \Big[ \log \tau_j - \tfrac{1}{2} \log |\Sigma_j| - \tfrac{1}{2} (\mathbf{x}_i - \boldsymbol\mu_j)^\top \Sigma_j^{-1} (\mathbf{x}_i - \boldsymbol\mu_j) - \tfrac{d}{2} \log(2\pi) \Big].$$
This does not need to be calculated, because in the M step we only require the terms depending on τ when we maximize for τ, or only the terms depending on μ if we maximize for μ.
The fact that Q(θ|θ(t)) is quadratic in form means that determining the maximizing values of θ is relatively straightforward. Note that τ, (μ1,Σ1) and (μ2,Σ2) may all be maximized independently since they all appear in separate linear terms.
To begin, consider $\tau$, which has the constraint $\tau_1 + \tau_2 = 1$:
$$\boldsymbol\tau^{(t+1)} = \operatorname*{arg\,max}_{\boldsymbol\tau}\, Q(\theta \mid \theta^{(t)}) = \operatorname*{arg\,max}_{\boldsymbol\tau}\, \left\{ \sum_{i=1}^{n} \Big[ T_{1,i}^{(t)} \log \tau_1 + T_{2,i}^{(t)} \log \tau_2 \Big] \right\}.$$
This has the same form as the MLE for the binomial distribution, so
$$\tau_j^{(t+1)} = \frac{\sum_{i=1}^{n} T_{j,i}^{(t)}}{\sum_{i=1}^{n} \big( T_{1,i}^{(t)} + T_{2,i}^{(t)} \big)} = \frac{1}{n} \sum_{i=1}^{n} T_{j,i}^{(t)}.$$
For the next estimates of $(\boldsymbol\mu_1, \Sigma_1)$:
$$(\boldsymbol\mu_1^{(t+1)}, \Sigma_1^{(t+1)}) = \operatorname*{arg\,max}_{\boldsymbol\mu_1, \Sigma_1}\, \sum_{i=1}^{n} T_{1,i}^{(t)} \log f(\mathbf{x}_i; \boldsymbol\mu_1, \Sigma_1).$$
This has the same form as a weighted MLE for a normal distribution, so
$$\boldsymbol\mu_1^{(t+1)} = \frac{\sum_{i=1}^{n} T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^{n} T_{1,i}^{(t)}} \quad \text{and} \quad \Sigma_1^{(t+1)} = \frac{\sum_{i=1}^{n} T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol\mu_1^{(t+1)})(\mathbf{x}_i - \boldsymbol\mu_1^{(t+1)})^\top}{\sum_{i=1}^{n} T_{1,i}^{(t)}},$$
and, by symmetry,
$$\boldsymbol\mu_2^{(t+1)} = \frac{\sum_{i=1}^{n} T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^{n} T_{2,i}^{(t)}} \quad \text{and} \quad \Sigma_2^{(t+1)} = \frac{\sum_{i=1}^{n} T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol\mu_2^{(t+1)})(\mathbf{x}_i - \boldsymbol\mu_2^{(t+1)})^\top}{\sum_{i=1}^{n} T_{2,i}^{(t)}}.$$
Conclude the iterative process when the increase in the expected complete-data log-likelihood from one iteration to the next falls below some preset threshold $\varepsilon$.
The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.
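The two-component case above can be sketched in a few lines of Python (an illustrative implementation under the notation of this example, using SciPy's multivariate normal density; initialization and numerical safeguards are omitted for brevity):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm2(x, tau, mu, sigma, n_iter=100):
    """EM for a two-component multivariate Gaussian mixture.

    x     : (n, d) data matrix
    tau   : (2,) initial mixing weights
    mu    : list of two (d,) initial mean vectors
    sigma : list of two (d, d) initial covariance matrices
    """
    for _ in range(n_iter):
        # E step: membership probabilities T[j, i] = P(Z_i = j | x_i; theta)
        dens = np.array([tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=sigma[j])
                         for j in range(2)])                      # shape (2, n)
        T = dens / dens.sum(axis=0, keepdims=True)

        # M step: closed-form updates from the soft counts
        for j in range(2):
            w = T[j]                                              # responsibilities for component j
            tau[j] = w.mean()
            mu[j] = (w[:, None] * x).sum(axis=0) / w.sum()
            diff = x - mu[j]
            sigma[j] = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(axis=0) / w.sum()
    return tau, mu, sigma
```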
The EM algorithm has been implemented in the case where there is an underlying linear regression model explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[25] Special cases of this model include censored or truncated observations from a single normal distribution.[25]
EM typically converges to a local optimum, not necessarily the global optimum, and there is no bound on the convergence rate in general. The convergence can be arbitrarily poor in high dimensions, and there can be an exponential number of local optima. Hence, there is a need for alternative techniques with guaranteed learning, especially in the high-dimensional setting. There are alternatives to EM with better guarantees in terms of consistency, known as moment-based approaches or so-called "spectral techniques". Moment-based approaches to learning the parameters of a probabilistic model have attracted increasing interest recently, since they enjoy guarantees such as global convergence under certain conditions, unlike EM, which often gets stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models and HMMs. For these spectral methods, there are no spurious local optima, and the true parameters can be consistently estimated under some regularity conditions.