
Bayesian field theory

Bibliographic Details
Classification: Electronic Book
Main Author: Lemm, Jörg C.
Format: Electronic eBook
Language: English
Published: Baltimore, Md. : Johns Hopkins University Press, 2003.
Subjects:
Online Access: Full text

MARC

LEADER 00000cam a2200000 a 4500
001 EBSCO_ocm52762436
003 OCoLC
005 20231017213018.0
006 m o d
007 cr cn|||||||||
008 030731s2003 mdua ob 001 0 eng d
040 |a N$T  |b eng  |e pn  |c N$T  |d YDXCP  |d OCLCQ  |d TUU  |d OCLCQ  |d TNF  |d OCLCQ  |d ZCU  |d OCLCO  |d OCLCF  |d OCLCQ  |d NLGGC  |d OCLCQ  |d E7B  |d P@U  |d COO  |d EBLCP  |d DEBSZ  |d OCLCQ  |d AGLDB  |d PIFBR  |d MERUC  |d OCLCQ  |d WY@  |d LUE  |d VTS  |d TOF  |d AU@  |d OCLCQ  |d K6U  |d UKAHL  |d OCLCO  |d OCLCQ 
019 |a 847594358  |a 965353906  |a 992053615 
020 |a 0801877970  |q (electronic bk.) 
020 |a 9780801877971  |q (electronic bk.) 
020 |z 0801872200  |q (alk. paper) 
020 |z 9780801872204 
029 1 |a DEBBG  |b BV043094086 
029 1 |a DEBSZ  |b 422438480 
029 1 |a DEBSZ  |b 44971523X 
035 |a (OCoLC)52762436  |z (OCoLC)847594358  |z (OCoLC)965353906  |z (OCoLC)992053615 
050 4 |a QC174.85.B38  |b L45 2003eb 
072 7 |a SCI  |x 040000  |2 bisacsh 
082 0 4 |a 530.15/95  |2 22 
049 |a UAMI 
100 1 |a Lemm, Jörg C. 
245 1 0 |a Bayesian field theory /  |c Jörg C. Lemm. 
260 |a Baltimore, Md. :  |b Johns Hopkins University Press,  |c 2003. 
300 |a 1 online resource (xix, 411 pages) :  |b illustrations 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
504 |a Includes bibliographical references (pages 365-402) and index. 
588 0 |a Print version record. 
505 0 |a Cover; Contents; List of Figures; List of Tables; List of Numerical Case Studies; Acknowledgments; 1 Introduction; 2 Bayesian framework; 2.1 Bayesian models; 2.1.1 Independent, dependent, and hidden variables; 2.1.2 Energies, free energies, and errors; 2.1.3 Bayes' theorem: Posterior, prior, and likelihood; 2.1.4 Predictive density and learning; 2.1.5 Mutual information and learning; 2.1.6 Maximum A Posteriori Approximation (MAP); 2.1.7 Normalization, non-negativity, and specific priors; 2.1.8 Numerical case study: A fair coin?; 2.2 Bayesian decision theory; 2.2.1 Loss and risk. 
505 8 |a 2.2.2 Loss functions for approximation; 2.2.3 General loss functions and unsupervised learning; 2.2.4 Log-loss and Maximum A Posteriori Approximation; 2.2.5 Empirical risk minimization; 2.2.6 Interpretations of Occam's razor; 2.2.7 Approaches to empirical learning; 2.3 A priori information; 2.3.1 Controlled, measured, and structural priors; 2.3.2 Noise induced priors; 3 Gaussian prior factors; 3.1 Gaussian prior factor for log-likelihoods; 3.1.1 Lagrange multipliers: Error functional E(L); 3.1.2 Normalization by parameterization: Error functional E(g); 3.1.3 The Hessians H[sub(L)], H[sub(g)]. 
505 8 |a 3.2 Gaussian prior factor for likelihoods; 3.2.1 Lagrange multipliers: Error functional E(P); 3.2.2 Normalization by parameterization: Error functional E(z); 3.2.3 The Hessians H[sub(P)], H[sub(z)]; 3.3 Quadratic density estimation and empirical risk minimization; 3.4 Numerical case study: Density estimation with Gaussian specific priors; 3.5 Gaussian prior factors for general field; 3.5.1 The general case; 3.5.2 Square root of P; 3.5.3 Distribution functions; 3.6 Covariances and invariances; 3.6.1 Approximate invariance; 3.6.2 Infinitesimal translations; 3.6.3 Approximate periodicity. 
505 8 |a 3.6.4 Approximate fractals; 3.7 Non-zero means; 3.8 Regression; 3.8.1 Gaussian regression; 3.8.2 Exact predictive density; 3.8.3 Gaussian mixture regression (cluster regression); 3.8.4 Support vector machines and regression; 3.8.5 Numerical case study: Approximately invariant regression (AIR); 3.9 Classification; 4 Parameterizing likelihoods: Variational methods; 4.1 General likelihood parameterizations; 4.2 Gaussian priors for likelihood parameters; 4.3 Linear trial spaces; 4.4 Linear regression; 4.5 Mixture models; 4.6 Additive models; 4.7 Product ansatz; 4.8 Decision trees. 
505 8 |a 4.9 Projection pursuit; 4.10 Neural networks; 5 Parameterizing priors: Hyperparameters; 5.1 Quenched and annealed prior normalization; 5.2 Saddle point approximations and hyperparameters; 5.2.1 Joint MAP; 5.2.2 Stepwise MAP; 5.2.3 Pointwise approximation; 5.2.4 Marginal posterior and empirical Bayes; 5.2.5 Some variants of stationarity equations; 5.3 Adapting prior means; 5.3.1 General considerations; 5.3.2 Density estimation and nonparametric boosting; 5.3.3 Unrestricted variation; 5.3.4 Regression; 5.4 Adapting prior covariances; 5.4.1 General case; 5.4.2 Automatic relevance detection. 
590 |a eBooks on EBSCOhost  |b EBSCO eBook Subscription Academic Collection - Worldwide 
650 0 |a Bayesian field theory. 
650 6 |a Théorie des champs bayésienne. 
650 7 |a SCIENCE  |x Physics  |x Mathematical & Computational.  |2 bisacsh 
650 7 |a Bayesian field theory.  |2 fast  |0 (OCoLC)fst00829018 
776 0 8 |i Print version:  |a Lemm, Jörg C.  |t Bayesian field theory.  |d Baltimore, Md. : Johns Hopkins University Press, 2003  |z 0801872200  |w (DLC) 2002073958  |w (OCoLC)50184931 
856 4 0 |u https://ebsco.uam.elogim.com/login.aspx?direct=true&scope=site&db=nlebk&AN=79367  |z Texto completo 
938 |a Askews and Holts Library Services  |b ASKH  |n AH14709956 
938 |a ProQuest Ebook Central  |b EBLB  |n EBL3318656 
938 |a EBSCOhost  |b EBSC  |n 79367 
938 |a Project MUSE  |b MUSE  |n muse20140 
938 |a YBP Library Services  |b YANK  |n 2341125 
994 |a 92  |b IZTAP