
Combining Pattern Classifiers: Methods and Algorithms

"Combined classifiers, which are central to the ubiquitous performance of pattern recognition and machine learning, are generally considered more accurate than single classifiers. In a didactic, detailed assessment, Combining Pattern Classifiers examines the basic theories and tactics of classi...


Bibliographic Details
Classification: Electronic Book
Main Author: Kuncheva, Ludmila I. (Ludmila Ilieva), 1959-
Format: Electronic eBook
Language: English
Published: Hoboken, NJ: Wiley, 2014.
Edition: Second edition.
Subjects:
Online Access: Full text
Table of Contents:
  • Titlepage
  • Copyright
  • Dedication
  • Preface
  • The Playing Field
  • Software
  • Structure and What is New in the Second Edition
  • Who is This Book For?
  • Notes
  • Acknowledgements
  • 1 Fundamentals of Pattern Recognition
  • 1.1 Basic Concepts: Class, Feature, Data Set
  • 1.2 Classifier, Discriminant Functions, Classification Regions
  • 1.3 Classification Error and Classification Accuracy
  • 1.4 Experimental Comparison of Classifiers
  • 1.5 Bayes Decision Theory
  • 1.6 Clustering and Feature Selection
  • 1.7 Challenges of Real-Life Data
  • Appendix
  • 1.A.1 Data Generation
  • 1.A.2 Comparison of Classifiers
  • 1.A.3 Feature Selection
  • Notes
  • 2 Base Classifiers
  • 2.1 Linear and Quadratic Classifiers
  • 2.2 Decision Tree Classifiers
  • 2.3 The Naïve Bayes Classifier
  • 2.4 Neural Networks
  • 2.5 Support Vector Machines
  • 2.6 The k-Nearest Neighbor Classifier (k-nn)
  • 2.7 Final Remarks
  • Appendix
  • 2.A.1 Matlab Code for the Fish Data
  • 2.A.2 Matlab Code for Individual Classifiers
  • Notes
  • 3 An Overview of the Field
  • 3.1 Philosophy
  • 3.2 Two Examples
  • 3.3 Structure of the Area
  • 5.3 Nontrainable (Fixed) Combination Rules
  • 5.4 The Weighted Average (Linear Combiner)
  • 5.5 A Classifier as a Combiner
  • 5.6 An Example of Nine Combiners for Continuous-Valued Outputs
  • 5.7 To Train or Not to Train?
  • Appendix
  • 5.A.1 Theoretical Classification Error for the Simple Combiners
  • 5.A.2 Selected Matlab Code
  • Notes
  • 6 Ensemble Methods
  • 6.1 Bagging
  • 6.2 Random Forests
  • 6.3 AdaBoost
  • 6.4 Random Subspace Ensembles
  • 6.5 Rotation Forest
  • 6.6 Random Linear Oracle
  • 6.7 Error Correcting Output Codes (ECOC)
  • Appendix
  • 6.A.1 Bagging
  • 6.A.2 AdaBoost
  • 6.A.3 Random Subspace
  • 6.A.4 Rotation Forest
  • 6.A.5 Random Linear Oracle
  • 6.A.6 ECOC
  • Notes
  • 7 Classifier Selection
  • 7.1 Preliminaries
  • 7.2 Why Classifier Selection Works
  • 7.3 Estimating Local Competence Dynamically
  • 7.4 Pre-Estimation of the Competence Regions
  • 7.5 Simultaneous Training of Regions and Classifiers
  • 7.6 Cascade Classifiers
  • Appendix: Selected Matlab Code
  • 7.A.1 Banana Data
  • 7.A.2 Evolutionary Algorithm for a Selection Ensemble for the Banana Data