Pattern classification using ensemble methods

Researchers from disciplines such as pattern recognition, statistics, and machine learning have explored ensemble methodology since the late seventies. As a result of this long-standing and growing interest, practitioners are now faced with a wide variety of methods. This book aims to impose a degree of order on this variety.


Bibliographic Details
Classification: Electronic Book
Main Author: Rokach, Lior
Format: Electronic eBook
Language: English
Published: Hackensack, N.J. ; London : World Scientific, ©2010.
Series: Series in machine perception and artificial intelligence ; v. 75.
Online Access: Full text
Table of Contents:
  • Preface; 1. Introduction to Pattern Classification; 1.1 Pattern Classification; 1.2 Induction Algorithms; 1.3 Rule Induction; 1.4 Decision Trees; 1.5 Bayesian Methods; 1.5.1 Overview; 1.5.2 Naïve Bayes; 1.5.2.1 The Basic Naïve Bayes Classifier; 1.5.2.2 Naïve Bayes Induction for Numeric Attributes; 1.5.2.3 Correction to the Probability Estimation; 1.5.2.4 Laplace Correction; 1.5.2.5 No Match; 1.5.3 Other Bayesian Methods; 1.6 Other Induction Methods; 1.6.1 Neural Networks; 1.6.2 Genetic Algorithms; 1.6.3 Instance-based Learning; 1.6.4 Support Vector Machines
  • 2. Introduction to Ensemble Learning; 2.1 Back to the Roots; 2.2 The Wisdom of Crowds; 2.3 The Bagging Algorithm; 2.4 The Boosting Algorithm; 2.5 The AdaBoost Algorithm; 2.6 No Free Lunch Theorem and Ensemble Learning; 2.7 Bias-Variance Decomposition and Ensemble Learning; 2.8 Occam's Razor and Ensemble Learning; 2.9 Classifier Dependency; 2.9.1 Dependent Methods; 2.9.1.1 Model-guided Instance Selection; 2.9.1.2 Basic Boosting Algorithms; 2.9.1.3 Advanced Boosting Algorithms; 2.9.1.4 Incremental Batch Learning; 2.9.2 Independent Methods; 2.9.2.1 Bagging; 2.9.2.2 Wagging
  • 2.9.2.3 Random Forest and Random Subspace Projection; 2.9.2.4 Non-Linear Boosting Projection (NLBP); 2.9.2.5 Cross-validated Committees; 2.9.2.6 Robust Boosting; 2.10 Ensemble Methods for Advanced Classification Tasks; 2.10.1 Cost-Sensitive Classification; 2.10.2 Ensemble for Learning Concept Drift; 2.10.3 Reject Driven Classification; 3. Ensemble Classification; 3.1 Fusion Methods; 3.1.1 Weighting Methods; 3.1.2 Majority Voting; 3.1.3 Performance Weighting; 3.1.4 Distribution Summation; 3.1.5 Bayesian Combination; 3.1.6 Dempster-Shafer; 3.1.7 Vogging; 3.1.8 Naïve Bayes
  • 3.1.9 Entropy Weighting; 3.1.10 Density-based Weighting; 3.1.11 DEA Weighting Method; 3.1.12 Logarithmic Opinion Pool; 3.1.13 Order Statistics; 3.2 Selecting Classification; 3.2.1 Partitioning the Instance Space; 3.2.1.1 The K-Means Algorithm as a Decomposition Tool; 3.2.1.2 Determining the Number of Subsets; 3.2.1.3 The Basic K-Classifier Algorithm; 3.2.1.4 The Heterogeneity Detecting K-Classifier (HDK-Classifier); 3.2.1.5 Running-Time Complexity; 3.3 Mixture of Experts and Meta Learning; 3.3.1 Stacking; 3.3.2 Arbiter Trees; 3.3.3 Combiner Trees; 3.3.4 Grading; 3.3.5 Gating Network
  • 4. Ensemble Diversity; 4.1 Overview; 4.2 Manipulating the Inducer; 4.2.1 Manipulation of the Inducer's Parameters; 4.2.2 Starting Point in Hypothesis Space; 4.2.3 Hypothesis Space Traversal; 4.3 Manipulating the Training Samples; 4.3.1 Resampling; 4.3.2 Creation; 4.3.3 Partitioning; 4.4 Manipulating the Target Attribute Representation; 4.4.1 Label Switching; 4.5 Partitioning the Search Space; 4.5.1 Divide and Conquer; 4.5.2 Feature Subset-based Ensemble Methods; 4.5.2.1 Random-based Strategy; 4.5.2.2 Reduct-based Strategy; 4.5.2.3 Collective-Performance-based Strategy