Bayesian statistics: an introduction
"--Presents extensive examples throughout the book to complement the theory presented. Includes significant new material on recent techniques such as variational methods, importance sampling, approximate computation and reversible jump MCMC"--
Classification: Electronic Book
Main author:
Format: Electronic eBook
Language: English
Published: Chichester, West Sussex; Hoboken, N.J.: Wiley, 2012.
Edition: 4th ed.
Subjects:
Online access: Full text (requires prior registration with an institutional email address)
Table of Contents:
- 7.3. Informative stopping rules
- 7.3.1. An example on capture and recapture of fish
- 7.3.2. Choice of prior and derivation of posterior
- 7.3.3. The maximum likelihood estimator
- 7.3.4. Numerical example
- 7.4. The likelihood principle and reference priors
- 7.4.1. The case of Bernoulli trials and its general implications
- 7.4.2. Conclusion
- 7.5. Bayesian decision theory
- 7.5.1. The elements of game theory
- 7.5.2. Point estimators resulting from quadratic loss
- 7.5.3. Particular cases of quadratic loss
- 7.5.4. Weighted quadratic loss
- 7.5.5. Absolute error loss
- 7.5.6. Zero-one loss
- 7.5.7. General discussion of point estimation
- 7.6. Bayes linear methods
- 7.6.1. Methodology
- 7.6.2. Some simple examples
- 7.6.3. Extensions
- 7.7. Decision theory and hypothesis testing
- 7.7.1. Relationship between decision theory and classical hypothesis testing
- 7.7.2. Composite hypotheses
- 7.8. Empirical Bayes methods
- 7.8.1. Von Mises' example
- 7.8.2. The Poisson case
- 7.9. Exercises on Chapter 7
- 8. Hierarchical models
- 8.1. The idea of a hierarchical model
- 8.1.1. Definition
- 8.1.2. Examples
- 8.1.3. Objectives of a hierarchical analysis
- 8.1.4. More on empirical Bayes methods
- 8.2. The hierarchical normal model
- 8.2.1. The model
- 8.2.2. The Bayesian analysis for known overall mean
- 8.2.3. The empirical Bayes approach
- 8.3. The baseball example
- 8.4. The Stein estimator
- 8.4.1. Evaluation of the risk of the James-Stein estimator
- 8.5. Bayesian analysis for an unknown overall mean
- 8.5.1. Derivation of the posterior
- 8.6. The general linear model revisited
- 8.6.1. An informative prior for the general linear model
- 8.6.2. Ridge regression
- 8.6.3. A further stage to the general linear model
- 8.6.4. The one-way model
- 8.6.5. Posterior variances of the estimators
- 8.7. Exercises on Chapter 8
- 9. The Gibbs sampler and other numerical methods
- 9.1. Introduction to numerical methods
- 9.1.1. Monte Carlo methods
- 9.1.2. Markov chains
- 9.2. The EM algorithm
- 9.2.1. The idea of the EM algorithm
- 9.2.2. Why the EM algorithm works
- 9.2.3. Semi-conjugate prior with a normal likelihood
- 9.2.4. The EM algorithm for the hierarchical normal model
- 9.2.5. A particular case of the hierarchical normal model
- 9.3. Data augmentation by Monte Carlo
- 9.3.1. The genetic linkage example revisited
- 9.3.2. Use of R
- 9.3.3. The genetic linkage example in R
- 9.3.4. Other possible uses for data augmentation
- 9.4. The Gibbs sampler
- 9.4.1. Chained data augmentation
- 9.4.2. An example with observed data
- 9.4.3. More on the semi-conjugate prior with a normal likelihood
- 9.4.4. The Gibbs sampler as an extension of chained data augmentation
- 9.4.5. An application to change-point analysis
- 9.4.6. Other uses of the Gibbs sampler
- 9.4.7. More about convergence
- 9.5. Rejection sampling
- 9.5.1. Description
- 9.5.2. Example
- 9.5.3. Rejection sampling for log-concave distributions
- 9.5.4. A practical example
- 9.6. The Metropolis-Hastings algorithm
- 9.6.1. Finding an invariant distribution
- 9.6.2. The Metropolis-Hastings algorithm
- 9.6.3. Choice of a candidate density
- 9.6.4. Example
- 9.6.5. More realistic examples
- 9.6.6. Gibbs as a special case of Metropolis-Hastings
- 9.6.7. Metropolis within Gibbs
- 9.7. Introduction to WinBUGS and OpenBUGS
- 9.7.1. Information about WinBUGS and OpenBUGS
- 9.7.2. Distributions in WinBUGS and OpenBUGS
- 9.7.3. A simple example using WinBUGS
- 9.7.4. The pump failure example revisited
- 9.7.5. DoodleBUGS
- 9.7.6. coda
- 9.7.7. R2WinBUGS and R2OpenBUGS
- 9.8. Generalized linear models
- 9.8.1. Logistic regression
- 9.8.2. A general framework
- 9.9. Exercises on Chapter 9
- 10. Some approximate methods
- 10.1. Bayesian importance sampling
- 10.1.1. Importance sampling to find HDRs
- 10.1.2. Sampling importance re-sampling
- 10.1.3. Multidimensional applications
- 10.2. Variational Bayesian methods: simple case
- 10.2.1. Independent parameters
- 10.2.2. Application to the normal distribution
- 10.2.3. Updating the mean
- 10.2.4. Updating the variance
- 10.2.5. Iteration
- 10.2.6. Numerical example
- 10.3. Variational Bayesian methods: general case
- 10.3.1. A mixture of multivariate normals
- 10.4. ABC: Approximate Bayesian Computation
- 10.4.1. The ABC rejection algorithm
- 10.4.2. The genetic linkage example
- 10.4.3. The ABC Markov Chain Monte Carlo algorithm
- 10.4.4. The ABC Sequential Monte Carlo algorithm
- 10.4.5. The ABC local linear regression algorithm
- 10.4.6. Other variants of ABC
- 10.5. Reversible jump Markov chain Monte Carlo
- 10.5.1. RJMCMC algorithm
- 10.6. Exercises on Chapter 10
- Appendix A Common statistical distributions
- A.1. Normal distribution
- A.2. Chi-squared distribution
- A.3. Normal approximation to chi-squared
- A.4. Gamma distribution
- A.5. Inverse chi-squared distribution
- A.6. Inverse chi distribution
- A.7. Log chi-squared distribution
- A.8. Student's t distribution
- A.9. Normal/chi-squared distribution
- A.10. Beta distribution
- A.11. Binomial distribution
- A.12. Poisson distribution
- A.13. Negative binomial distribution
- A.14. Hypergeometric distribution
- A.15. Uniform distribution
- A.16. Pareto distribution
- A.17. Circular normal distribution
- A.18. Behrens' distribution
- A.19. Snedecor's F distribution
- A.20. Fisher's z distribution
- A.21. Cauchy distribution
- A.22. The probability that one beta variable is greater than another
- A.23. Bivariate normal distribution
- A.24. Multivariate normal distribution
- A.25. Distribution of the correlation coefficient
- Appendix B Tables
- B.1. Percentage points of the Behrens-Fisher distribution
- B.2. Highest density regions for the chi-squared distribution
- B.3. HDRs for the inverse chi-squared distribution
- B.4. Chi-squared corresponding to HDRs for log chi-squared
- B.5. Values of F corresponding to HDRs for log F
- Appendix C R programs
- Appendix D Further reading
- D.1. Robustness
- D.2. Nonparametric methods
- D.3. Multivariate estimation
- D.4. Time series and forecasting
- D.5. Sequential methods
- D.6. Numerical methods
- D.7. Bayesian networks
- D.8. General reading.