Neural Networks: Tricks of the Trade

Bibliographic Details
Classification: Electronic Book
Corporate Author: SpringerLink (Online service)
Other Authors: Montavon, Grégoire (Editor), Orr, Geneviève (Editor), Müller, Klaus-Robert (Editor)
Format: Electronic eBook
Language: English
Published: Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2012.
Edition: 2nd ed. 2012.
Series: Theoretical Computer Science and General Issues, 7700
Subjects: Computer science; Artificial intelligence; Algorithms; Pattern recognition systems; Dynamics; Nonlinear theories; Application software
Online Access: Full Text

MARC

LEADER 00000nam a22000005i 4500
001 978-3-642-35289-8
003 DE-He213
005 20230329201240.0
007 cr nn 008mamaa
008 121116s2012 gw | s |||| 0|eng d
020 |a 9783642352898  |9 978-3-642-35289-8 
024 7 |a 10.1007/978-3-642-35289-8  |2 doi 
050 4 |a QA75.5-76.95 
072 7 |a UYA  |2 bicssc 
072 7 |a COM037000  |2 bisacsh 
072 7 |a UYA  |2 thema 
082 0 4 |a 004.0151  |2 23 
245 1 0 |a Neural Networks: Tricks of the Trade  |h [electronic resource] /  |c edited by Grégoire Montavon, Geneviève Orr, Klaus-Robert Müller. 
250 |a 2nd ed. 2012. 
264 1 |a Berlin, Heidelberg :  |b Springer Berlin Heidelberg :  |b Imprint: Springer,  |c 2012. 
300 |a XII, 769 p. 223 illus.  |b online resource. 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
347 |a text file  |b PDF  |2 rda 
490 1 |a Theoretical Computer Science and General Issues,  |x 2512-2029 ;  |v 7700 
505 0 |a Introduction -- Preface on Speeding Learning -- 1. Efficient BackProp -- Preface on Regularization Techniques to Improve Generalization -- 2. Early Stopping - But When? -- 3. A Simple Trick for Estimating the Weight Decay Parameter -- 4. Controlling the Hyperparameter Search in MacKay's Bayesian Neural Network Framework -- 5. Adaptive Regularization in Neural Network Modeling -- 6. Large Ensemble Averaging -- Preface on Improving Network Models and Algorithmic Tricks -- 7. Square Unit Augmented, Radially Extended, Multilayer Perceptrons -- 8. A Dozen Tricks with Multitask Learning -- 9. Solving the Ill-Conditioning in Neural Network Learning -- 10. Centering Neural Network Gradient Factors -- 11. Avoiding Roundoff Error in Backpropagating Derivatives -- 12. Transformation Invariance in Pattern Recognition - Tangent Distance and Tangent Propagation -- 13. Combining Neural Networks and Context-Driven Search for On-line, Printed Handwriting Recognition in the Newton -- 14. Neural Network Classification and Prior Class Probabilities -- 15. Applying Divide and Conquer to Large Scale Pattern Recognition Tasks -- Preface on Tricks for Time Series -- 16. Forecasting the Economy with Neural Nets: A Survey of Challenges and Solutions -- 17. How to Train Neural Networks -- Preface on Big Learning in Deep Neural Networks -- 18. Stochastic Gradient Descent Tricks -- 19. Practical Recommendations for Gradient-Based Training of Deep Architectures -- 20. Training Deep and Recurrent Networks with Hessian-Free Optimization -- 21. Implementing Neural Networks Efficiently -- Preface on Better Representations: Invariant, Disentangled and Reusable -- 22. Learning Feature Representations with K-Means -- 23. Deep Big Multilayer Perceptrons for Digit Recognition -- 24. A Practical Guide to Training Restricted Boltzmann Machines -- 25. Deep Boltzmann Machines and the Centering Trick -- 26. Deep Learning via Semi-supervised Embedding -- Preface on Identifying Dynamical Systems for Forecasting and Control -- 27. A Practical Guide to Applying Echo State Networks -- 28. Forecasting with Recurrent Neural Networks: 12 Tricks -- 29. Solving Partially Observable Reinforcement Learning Problems with Recurrent Neural Networks -- 30. 10 Steps and Some Tricks to Set up Neural Reinforcement Controllers.
520 |a The last twenty years have been marked by an increase in available data and computing power. In parallel with this trend, the focus of neural network research and the practice of training neural networks have undergone a number of important changes, for example, the use of deep learning machines. The second edition of the book augments the first edition with more tricks, which have resulted from 14 years of theory and experimentation by some of the world's most prominent neural network researchers. These tricks can make a substantial difference (in terms of speed, ease of implementation, and accuracy) when it comes to putting algorithms to work on real problems.
650 0 |a Computer science. 
650 0 |a Artificial intelligence. 
650 0 |a Algorithms. 
650 0 |a Pattern recognition systems. 
650 0 |a Dynamics. 
650 0 |a Nonlinear theories. 
650 0 |a Application software. 
650 1 4 |a Theory of Computation. 
650 2 4 |a Artificial Intelligence. 
650 2 4 |a Algorithms. 
650 2 4 |a Automated Pattern Recognition. 
650 2 4 |a Applied Dynamical Systems. 
650 2 4 |a Computer and Information Systems Applications. 
700 1 |a Montavon, Grégoire.  |e editor.  |4 edt  |4 http://id.loc.gov/vocabulary/relators/edt 
700 1 |a Orr, Geneviève.  |e editor.  |4 edt  |4 http://id.loc.gov/vocabulary/relators/edt 
700 1 |a Müller, Klaus-Robert.  |e editor.  |4 edt  |4 http://id.loc.gov/vocabulary/relators/edt 
710 2 |a SpringerLink (Online service) 
773 0 |t Springer Nature eBook 
776 0 8 |i Printed edition:  |z 9783642352904 
776 0 8 |i Printed edition:  |z 9783642352881 
830 0 |a Theoretical Computer Science and General Issues,  |x 2512-2029 ;  |v 7700 
856 4 0 |u https://doi.uam.elogim.com/10.1007/978-3-642-35289-8  |z Texto Completo 
912 |a ZDB-2-SCS 
912 |a ZDB-2-SXCS 
912 |a ZDB-2-LNC 
950 |a Computer Science (SpringerNature-11645) 
950 |a Computer Science (R0) (SpringerNature-43710)