
Mixtures: estimation and applications

This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a compl...
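A minimal sketch of the EM iteration the description refers to, for a two-component univariate Gaussian mixture (an illustrative assumption; this is not code from the book). The E-step estimates the missing component memberships given the current parameters, and the M-step re-estimates the mixing weights, means and variances from those memberships:

    import numpy as np

    def normal_pdf(x, mean, var):
        # Density of a univariate Gaussian evaluated at each point of x.
        return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

    def em_gaussian_mixture(x, n_iter=200):
        # Crude initialisation: centre the two components on the data quartiles.
        w = np.array([0.5, 0.5])                              # mixing weights
        means = np.array([np.percentile(x, 25), np.percentile(x, 75)])
        variances = np.array([np.var(x), np.var(x)])
        for _ in range(n_iter):
            # E-step: posterior probability that each observation came from each component.
            dens = np.stack([w[k] * normal_pdf(x, means[k], variances[k]) for k in range(2)])
            resp = dens / dens.sum(axis=0)
            # M-step: weighted maximum-likelihood updates of the parameters.
            n_k = resp.sum(axis=1)
            w = n_k / len(x)
            means = (resp * x).sum(axis=1) / n_k
            variances = (resp * (x - means[:, None]) ** 2).sum(axis=1) / n_k
        return w, means, variances

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 700)])
        print(em_gaussian_mixture(data))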


Bibliographic Details
Classification: Electronic book
Other Authors: Mengersen, Kerrie L., Robert, Christian P., 1961-, Titterington, D. M.
Format: Electronic eBook
Language: English
Published: Hoboken, N.J. : Wiley, 2011.
Online Access: Full text
Table of Contents:
  • 1. The EM algorithm, variational approximations and expectation propagation for mixtures / D. Michael Titterington
  • 1.1. Preamble
  • 1.2. The EM algorithm
  • 1.2.1. Introduction to the algorithm
  • 1.2.2. The E-step and the M-step for the mixing weights
  • 1.2.3. The M-step for mixtures of univariate Gaussian distributions
  • 1.2.4. M-step for mixtures of regular exponential family distributions formulated in terms of the natural parameters
  • 1.2.5. Application to other mixtures
  • 1.2.6. EM as a double expectation
  • 1.3. Variational approximations
  • 1.3.1. Preamble
  • 1.3.2. Introduction to variational approximations
  • 1.3.3. Application of variational Bayes to mixture problems
  • 1.3.4. Application to other mixture problems
  • 1.3.5. Recursive variational approximations
  • 1.3.6. Asymptotic results
  • 1.4. Expectation-propagation
  • 1.4.1. Introduction
  • 1.4.2. Overview of the recursive approach to be adopted
  • 1.4.3. Finite Gaussian mixtures with an unknown mean parameter
  • 1.4.4. Mixture of two known distributions
  • 1.4.5. Discussion
  • Acknowledgements
  • References
  • 2. Online expectation maximisation / Olivier Cappé
  • 2.1. Introduction
  • 2.2. Model and assumptions
  • 2.3. The EM algorithm and the limiting EM recursion
  • 2.3.1. The batch EM algorithm
  • 2.3.2. The limiting EM recursion
  • 2.3.3. Limitations of batch EM for long data records
  • 2.4. Online expectation maximisation
  • 2.4.1. The algorithm
  • 2.4.2. Convergence properties
  • 2.4.3. Application to finite mixtures
  • 2.4.4. Use for batch maximum-likelihood estimation
  • 2.5. Discussion
  • References
  • 3. The limiting distribution of the EM test of the order of a finite mixture / Pengfei Li
  • 3.1. Introduction
  • 3.2. The method and theory of the EM test
  • 3.2.1. The definition of the EM test statistic
  • 3.2.2. The limiting distribution of the EM test statistic
  • 3.3. Proofs
  • 3.4. Discussion
  • References
  • 4. Comparing Wald and likelihood regions applied to locally identifiable mixture models / Bruce G. Lindsay
  • 4.1. Introduction
  • 4.2. Background on likelihood confidence regions
  • 4.2.1. Likelihood regions
  • 4.2.2. Profile likelihood regions
  • 4.2.3. Alternative methods
  • 4.3. Background on simulation and visualisation of the likelihood regions
  • 4.3.1. Modal simulation method
  • 4.3.2. Illustrative example
  • 4.4. Comparison between the likelihood regions and the Wald regions
  • 4.4.1. Volume/volume error of the confidence regions
  • 4.4.2. Differences in univariate intervals via worst case analysis
  • 4.4.3. Illustrative example (revisited)
  • 4.5. Application to a finite mixture model
  • 4.5.1. Nonidentifiabilities and likelihood regions for the mixture parameters
  • 4.5.2. Mixture likelihood region simulation and visualisation
  • 4.5.3. Adequacy of using the Wald confidence region
  • 4.6. Data analysis
  • 4.7. Discussion
  • References
  • 5. Mixture of experts modelling with social science applications / Thomas Brendan Murphy
  • 5.1. Introduction
  • 5.2. Motivating examples
  • 5.2.1. Voting blocs
  • 5.2.2. Social and organisational structure
  • 5.3. Mixture models
  • 5.4. Mixture of experts models
  • 5.5. A mixture of experts model for ranked preference data
  • 5.5.1. Examining the clustering structure
  • 5.6. A mixture of experts latent position cluster model
  • 5.7. Discussion
  • Acknowledgements
  • References
  • 6. Modelling conditional densities using finite smooth mixtures / Robert Kohn
  • 6.1. Introduction
  • 6.2. The model and prior
  • 6.2.1. Smooth mixtures
  • 6.2.2. The component models
  • 6.2.3. The prior
  • 6.3. Inference methodology
  • 6.3.1. The general MCMC scheme
  • 6.3.2. Updating β and I using variable-dimension finite-step Newton proposals
  • 6.3.3. Model comparison
  • 6.4. Applications
  • 6.4.1. A small simulation study
  • 6.4.2. LIDAR data
  • 6.4.3. Electricity expenditure data
  • 6.5. Conclusions
  • Acknowledgements
  • Appendix: Implementation details for the gamma and log-normal models
  • References
  • 7. Nonparametric mixed membership modelling using the IBP compound Dirichlet process / David M. Blei
  • 7.1. Introduction
  • 7.2. Mixed membership models
  • 7.2.1. Latent Dirichlet allocation
  • 7.2.2. Nonparametric mixed membership models
  • 7.3. Motivation
  • 7.4. Decorrelating prevalence and proportion
  • 7.4.1. Indian buffet process
  • 7.4.2. The IBP compound Dirichlet process
  • 7.4.3. An application of the ICD: focused topic models
  • 7.4.4. Inference
  • 7.5. Related models
  • 7.6. Empirical studies
  • 7.7. Discussion
  • References
  • 8. Discovering nonbinary hierarchical structures with Bayesian rose trees / Katherine A. Heller
  • 8.1. Introduction
  • 8.2. Prior work
  • 8.3. Rose trees, partitions and mixtures
  • 8.4. Avoiding needless cascades
  • 8.4.1. Cluster models
  • 8.5. Greedy construction of Bayesian rose tree mixtures
  • 8.5.1. Prediction
  • 8.5.2. Hyperparameter optimisation
  • 8.6. Bayesian hierarchical clustering, Dirichlet process models and product partition models
  • 8.6.1. Mixture models and product partition models
  • 8.6.2. PCluster and Bayesian hierarchical clustering
  • 8.7. Results
  • 8.7.1. Optimality of tree structure
  • 8.7.2. Hierarchy likelihoods
  • 8.7.3. Partially observed data
  • 8.7.4. Psychological hierarchies
  • 8.7.5. Hierarchies of Gaussian process experts
  • 8.8. Discussion
  • References
  • 9. Mixtures of factor analysers for the analysis of high-dimensional data / Suren I. Rathnayake
  • 9.1. Introduction
  • 9.2. Single-factor analysis model
  • 9.3. Mixtures of factor analysers
  • 9.4. Mixtures of common factor analysers (MCFA)
  • 9.5. Some related approaches
  • 9.6. Fitting of factor-analytic models
  • 9.7. Choice of the number of factors q
  • 9.8. Example
  • 9.9. Low-dimensional plots via MCFA approach
  • 9.10. Multivariate t-factor analysers
  • 9.11. Discussion
  • Appendix
  • References
  • 10. Dealing with label switching under model uncertainty / Sylvia Frühwirth-Schnatter
  • 10.1. Introduction
  • 10.2. Labelling through clustering in the point-process representation
  • 10.2.1. The point-process representation of a finite mixture model
  • 10.2.2. Identification through clustering in the point-process representation
  • 10.3. Identifying mixtures when the number of components is unknown
  • 10.3.1. The role of Dirichlet priors in overfitting mixtures
  • 10.3.2. The meaning of K for overfitting mixtures
  • 10.3.3. The point-process representation of overfitting mixtures
  • 10.3.4. Examples
  • 10.4. Overfitting heterogeneity of component-specific parameters
  • 10.4.1. Overfitting heterogeneity
  • 10.4.2. Using shrinkage priors on the component-specific location parameters
  • 10.5. Concluding remarks
  • References
  • 11. Exact Bayesian analysis of mixtures / Kerrie L. Mengersen
  • 11.1. Introduction
  • 11.2. Formal derivation of the posterior distribution
  • 11.2.1. Locally conjugate priors
  • 11.2.2. True posterior distributions
  • 11.2.3. Poisson mixture
  • 11.2.4. Multinomial mixtures
  • 11.2.5. Normal mixtures
  • References
  • 12. Manifold MCMC for mixtures / Mark Girolami
  • 12.1. Introduction
  • 12.2. Markov chain Monte Carlo methods
  • 12.2.1. Metropolis-Hastings
  • 12.2.2. Gibbs sampling
  • 12.2.3. Manifold Metropolis adjusted Langevin algorithm
  • 12.2.4. Manifold Hamiltonian Monte Carlo
  • 12.3. Finite Gaussian mixture models
  • 12.3.1. Gibbs sampler for mixtures of univariate Gaussians
  • 12.3.2. Manifold MCMC for mixtures of univariate Gaussians
  • 12.3.3. Metric tensor
  • 12.3.4. An illustrative example
  • 12.4. Experiments
  • 12.5. Discussion
  • Acknowledgements
  • Appendix
  • References
  • 13. How many components in a finite mixture? / Murray Aitkin
  • 13.1. Introduction
  • 13.2. The galaxy data
  • 13.3. The normal mixture model
  • 13.4. Bayesian analyses
  • 13.4.1. Escobar and West
  • 13.4.2. Phillips and Smith
  • 13.4.3. Roeder and Wasserman
  • 13.4.4. Richardson and Green
  • 13.4.5. Stephens
  • 13.5. Posterior distributions for K (for flat prior)
  • 13.6. Conclusions from the Bayesian analyses
  • 13.7. Posterior distributions of the model deviances
  • 13.8. Asymptotic distributions
  • 13.9. Posterior deviances for the galaxy data
  • 13.10. Conclusions
  • References
  • 14. Bayesian mixture models: a blood-free dissection of a sheep / Graham E. Gardner
  • 14.1. Introduction
  • 14.2. Mixture models
  • 14.2.1. Hierarchical normal mixture
  • 14.3. Altering dimensions of the mixture model
  • 14.4. Bayesian mixture model incorporating spatial information
  • 14.4.1. Results
  • 14.5. Volume calculation
  • 14.6. Discussion
  • References