LEADER |
00000cam a2200000 i 4500 |
001 |
EBOOKCENTRAL_ocn878051089 |
003 |
OCoLC |
005 |
20240329122006.0 |
006 |
m o d |
007 |
cr ||||||||||| |
008 |
140422s2014 nju ob 001 0 eng |
010 |
|
|
|a 2014016123
|
040 |
|
|
|a DLC
|b eng
|e rda
|e pn
|c DLC
|d YDX
|d N$T
|d EBLCP
|d YDXCP
|d OCLCF
|d DG1
|d UMI
|d E7B
|d OCLCO
|d COO
|d DEBBG
|d OCLCQ
|d DEBSZ
|d B24X7
|d VT2
|d DG1
|d COCUF
|d DG1
|d MOR
|d LIP
|d PIFAG
|d ZCU
|d LIV
|d MERUC
|d OCLCQ
|d U3W
|d OCLCQ
|d STF
|d CEF
|d ICG
|d INT
|d AU@
|d OCLCQ
|d TKN
|d OCLCQ
|d DKC
|d OCLCQ
|d BRF
|d OCLCO
|d OCLCQ
|d OCLCO
|
066 |
|
|
|c (S
|
019 |
|
|
|a 891397879
|a 903257921
|a 913462197
|a 927509113
|a 961585666
|a 962644700
|a 1055373957
|a 1058722225
|a 1081219290
|
020 |
|
|
|a 9781118914540
|q (ePub)
|
020 |
|
|
|a 1118914546
|q (ePub)
|
020 |
|
|
|a 9781118914557
|q (Adobe PDF)
|
020 |
|
|
|a 1118914554
|q (Adobe PDF)
|
020 |
|
|
|a 9781118914564
|
020 |
|
|
|a 1118914562
|
020 |
|
|
|z 9781118315231
|q (hardback)
|
020 |
|
|
|z 1118315235
|q (hardback)
|
029 |
1 |
|
|a AU@
|b 000052794990
|
029 |
1 |
|
|a CHBIS
|b 010259798
|
029 |
1 |
|
|a CHNEW
|b 000943025
|
029 |
1 |
|
|a CHVBK
|b 480232423
|
029 |
1 |
|
|a DEBBG
|b BV042487511
|
029 |
1 |
|
|a DEBBG
|b BV044069807
|
029 |
1 |
|
|a DEBSZ
|b 431744548
|
029 |
1 |
|
|a DEBSZ
|b 434829099
|
029 |
1 |
|
|a DEBSZ
|b 449440737
|
029 |
1 |
|
|a DEBSZ
|b 485047659
|
029 |
1 |
|
|a NZ1
|b 15920915
|
035 |
|
|
|a (OCoLC)878051089
|z (OCoLC)891397879
|z (OCoLC)903257921
|z (OCoLC)913462197
|z (OCoLC)927509113
|z (OCoLC)961585666
|z (OCoLC)962644700
|z (OCoLC)1055373957
|z (OCoLC)1058722225
|z (OCoLC)1081219290
|
037 |
|
|
|a CL0500000553
|b Safari Books Online
|
042 |
|
|
|a pcc
|
050 |
0 |
0 |
|a TK7882.P3
|
072 |
|
7 |
|a COM
|x 000000
|2 bisacsh
|
082 |
0 |
0 |
|a 006.4
|2 23
|
084 |
|
|
|a TEC015000
|a COM016000
|a COM021030
|2 bisacsh
|
049 |
|
|
|a UAMI
|
100 |
1 |
|
|a Kuncheva, Ludmila I.
|q (Ludmila Ilieva),
|d 1959-
|
245 |
1 |
0 |
|a Combining pattern classifiers :
|b methods and algorithms /
|c Ludmila I. Kuncheva.
|
250 |
|
|
|a Second edition.
|
264 |
|
1 |
|a Hoboken, NJ :
|b Wiley,
|c 2014.
|
300 |
|
|
|a 1 online resource (xxi, 357 pages)
|
336 |
|
|
|a text
|b txt
|2 rdacontent
|
337 |
|
|
|a computer
|b c
|2 rdamedia
|
338 |
|
|
|a online resource
|b cr
|2 rdacarrier
|
504 |
|
|
|a Includes bibliographical references and index.
|
520 |
|
|
|a "Combined classifiers, which are central to the ubiquitous performance of pattern recognition and machine learning, are generally considered more accurate than single classifiers. In a didactic, detailed assessment, Combining Pattern Classifiers examines the basic theories and tactics of classifier combination while presenting the most recent research in the field. Among the pattern recognition tasks that this book explores are mail sorting, face recognition, signature verification, decoding brain fMRI images, identifying emotions, analyzing gene microarray data, and spotting patterns in consumer preference. This updated second edition is equipped with the latest knowledge for academics, students, and practitioners involved in pattern recognition fields"--
|c Provided by publisher
|
520 |
|
|
|a "Classifier Combination is a field of growing interest within the very large area of Pattern Classification"--
|c Provided by publisher
|
588 |
0 |
|
|a Print version record and CIP data provided by publisher.
|
505 |
0 |
|
|6 880-01
|a Titlepage -- Copyright -- Dedication -- Preface -- The Playing Field -- Software -- Structure and What is New in the Second Edition -- Who is This Book For? -- Notes -- Acknowledgements -- 1 Fundamentals of Pattern Recognition -- 1.1 Basic Concepts: Class, Feature, Data Set -- 1.2 Classifier, Discriminant Functions, Classification Regions -- 1.3 Classification Error and Classification Accuracy -- 1.4 Experimental Comparison of Classifiers -- 1.5 Bayes Decision Theory -- 1.6 Clustering and Feature Selection -- 1.7 Challenges of Real-Life Data -- Appendix
|
505 |
8 |
|
|a 1.A.1 Data Generation -- 1.A.2 Comparison of Classifiers -- 1.A.3 Feature Selection -- Notes -- 2 Base Classifiers -- 2.1 Linear and Quadratic Classifiers -- 2.2 Decision Tree Classifiers -- 2.3 The Naïve Bayes Classifier -- 2.4 Neural Networks -- 2.5 Support Vector Machines -- 2.6 The k-Nearest Neighbor Classifier (k-nn) -- 2.7 Final Remarks -- Appendix -- 2.A.1 Matlab Code for the Fish Data -- 2.A.2 Matlab Code for Individual Classifiers -- Notes -- 3 An Overview of the Field -- 3.1 Philosophy -- 3.2 Two Examples -- 3.3 Structure of the Area
|
505 |
8 |
|
|a 5.3 Nontrainable (Fixed) Combination Rules -- 5.4 The Weighted Average (Linear Combiner) -- 5.5 A Classifier as a Combiner -- 5.6 An Example of Nine Combiners for Continuous-Valued Outputs -- 5.7 To Train or Not to Train? -- Appendix -- 5.A.1 Theoretical Classification Error for the Simple Combiners -- 5.A.2 Selected Matlab Code -- Notes -- 6 Ensemble Methods -- 6.1 Bagging -- 6.2 Random Forests -- 6.3 AdaBoost -- 6.4 Random Subspace Ensembles -- 6.5 Rotation Forest -- 6.6 Random Linear Oracle -- 6.7 Error Correcting Output Codes (ECOC) -- Appendix
|
505 |
8 |
|
|a 6.A.1 Bagging -- 6.A.2 AdaBoost -- 6.A.3 Random Subspace -- 6.A.4 Rotation Forest -- 6.A.5 Random Linear Oracle -- 6.A.6 ECOC -- Notes -- 7 Classifier Selection -- 7.1 Preliminaries -- 7.2 Why Classifier Selection Works -- 7.3 Estimating Local Competence Dynamically -- 7.4 Pre-Estimation of the Competence Regions -- 7.5 Simultaneous Training of Regions and Classifiers -- 7.6 Cascade Classifiers -- Appendix: Selected Matlab Code -- 7.A.1 Banana Data -- 7.A.2 Evolutionary Algorithm for a Selection Ensemble for the Banana Data
|
590 |
|
|
|a ProQuest Ebook Central
|b Ebook Central Academic Complete
|
650 |
|
0 |
|a Pattern recognition systems.
|
650 |
|
0 |
|a Image processing
|x Digital techniques.
|
650 |
|
6 |
|a Reconnaissance des formes (Informatique)
|
650 |
|
6 |
|a Traitement d'images
|x Techniques numériques.
|
650 |
|
7 |
|a digital imaging.
|2 aat
|
650 |
|
7 |
|a TECHNOLOGY & ENGINEERING
|x Imaging Systems.
|2 bisacsh
|
650 |
|
7 |
|a COMPUTERS
|x Computer Vision & Pattern Recognition.
|2 bisacsh
|
650 |
|
7 |
|a COMPUTERS
|x Database Management
|x Data Mining.
|2 bisacsh
|
650 |
|
7 |
|a Image processing
|x Digital techniques
|2 fast
|
650 |
|
7 |
|a Pattern recognition systems
|2 fast
|
776 |
0 |
8 |
|i Print version:
|a Kuncheva, Ludmila I. (Ludmila Ilieva), 1959-
|t Combining pattern classifiers.
|b Second edition.
|d Hoboken, New Jersey : Wiley, [2014]
|z 9781118315231
|w (DLC) 2014014214
|w (OCoLC)878050954
|
856 |
4 |
0 |
|u https://ebookcentral.uam.elogim.com/lib/uam-ebooks/detail.action?docID=1762076
|z Full text
|
880 |
0 |
0 |
|6 505-01/(S
|g Machine generated contents note:
|g 1.
|t Fundamentals of Pattern Recognition --
|g 1.1.
|t Basic Concepts: Class, Feature, Data Set --
|g 1.1.1.
|t Classes and Class Labels --
|g 1.1.2.
|t Features --
|g 1.1.3.
|t Data Set --
|g 1.1.4.
|t Generate Your Own Data --
|g 1.2.
|t Classifier, Discriminant Functions, Classification Regions --
|g 1.3.
|t Classification Error and Classification Accuracy --
|g 1.3.1.
|t Where Does the Error Come From? Bias and Variance --
|g 1.3.2.
|t Estimation of the Error --
|g 1.3.3.
|t Confusion Matrices and Loss Matrices --
|g 1.3.4.
|t Training and Testing Protocols --
|g 1.3.5.
|t Overtraining and Peeking --
|g 1.4.
|t Experimental Comparison of Classifiers --
|g 1.4.1.
|t Two Trained Classifiers and a Fixed Testing Set --
|g 1.4.2.
|t Two Classifier Models and a Single Data Set --
|g 1.4.3.
|t Two Classifier Models and Multiple Data Sets --
|g 1.4.4.
|t Multiple Classifier Models and Multiple Data Sets --
|g 1.5.
|t Bayes Decision Theory --
|g 1.5.1.
|t Probabilistic Framework --
|g 1.5.2.
|t Discriminant Functions and Decision Boundaries --
|g 1.5.3.
|t Bayes Error --
|g 1.6.
|t Clustering and Feature Selection --
|g 1.6.1.
|t Clustering --
|g 1.6.2.
|t Feature Selection --
|g 1.7.
|t Challenges of Real-Life Data --
|t Appendix --
|g 1.A.1.
|t Data Generation --
|g 1.A.2.
|t Comparison of Classifiers --
|g 1.A.2.1.
|t MATLAB Functions for Comparing Classifiers --
|g 1.A.2.2.
|t Critical Values for Wilcoxon and Sign Test --
|g 1.A.3.
|t Feature Selection --
|g 2.
|t Base Classifiers --
|g 2.1.
|t Linear and Quadratic Classifiers --
|g 2.1.1.
|t Linear Discriminant Classifier --
|g 2.1.2.
|t Nearest Mean Classifier --
|g 2.1.3.
|t Quadratic Discriminant Classifier --
|g 2.1.4.
|t Stability of LDC and QDC --
|g 2.2.
|t Decision Tree Classifiers --
|g 2.2.1.
|t Basics and Terminology --
|g 2.2.2.
|t Training of Decision Tree Classifiers --
|g 2.2.3.
|t Selection of the Feature for a Node --
|g 2.2.4.
|t Stopping Criterion --
|g 2.2.5.
|t Pruning of the Decision Tree --
|g 2.2.6.
|t C4.5 and ID3 --
|g 2.2.7.
|t Instability of Decision Trees --
|g 2.2.8.
|t Random Trees --
|g 2.3.
|t Naive Bayes Classifier --
|g 2.4.
|t Neural Networks --
|g 2.4.1.
|t Neurons --
|g 2.4.2.
|t Rosenblatt's Perceptron --
|g 2.4.3.
|t Multi-Layer Perceptron --
|g 2.5.
|t Support Vector Machines --
|g 2.5.1.
|t Why Would It Work? --
|g 2.5.2.
|t Classification Margins --
|g 2.5.3.
|t Optimal Linear Boundary --
|g 2.5.4.
|t Parameters and Classification Boundaries of SVM --
|g 2.6.
|t k-Nearest Neighbor Classifier (k-nn) --
|g 2.7.
|t Final Remarks --
|g 2.7.1.
|t Simple or Complex Models? --
|g 2.7.2.
|t Triangle Diagram --
|g 2.7.3.
|t Choosing a Base Classifier for Ensembles --
|t Appendix --
|g 2.A.1.
|t MATLAB Code for the Fish Data --
|g 2.A.2.
|t MATLAB Code for Individual Classifiers --
|g 2.A.2.1.
|t Decision Tree --
|g 2.A.2.2.
|t Naive Bayes --
|g 2.A.2.3.
|t Multi-Layer Perceptron --
|g 2.A.2.4.
|t 1-nn Classifier --
|g 3.
|t Overview of the Field --
|g 3.1.
|t Philosophy --
|g 3.2.
|t Two Examples --
|g 3.2.1.
|t Wisdom of the "Classifier Crowd" --
|g 3.2.2.
|t Power of Divide-and-Conquer --
|g 3.3.
|t Structure of the Area --
|g 3.3.1.
|t Terminology --
|g 3.3.2.
|t Taxonomy of Classifier Ensemble Methods --
|g 3.3.3.
|t Classifier Fusion and Classifier Selection --
|g 3.4.
|t Quo Vadis? --
|g 3.4.1.
|t Reinventing the Wheel? --
|g 3.4.2.
|t Illusion of Progress? --
|g 3.4.3.
|t Bibliometric Snapshot --
|g 4.
|t Combining Label Outputs --
|g 4.1.
|t Types of Classifier Outputs --
|g 4.2.
|t Probabilistic Framework for Combining Label Outputs --
|g 4.3.
|t Majority Vote --
|g 4.3.1.
|t "Democracy" in Classifier Combination --
|g 4.3.2.
|t Accuracy of the Majority Vote --
|g 4.3.3.
|t Limits on the Majority Vote Accuracy: An Example --
|g 4.3.4.
|t Patterns of Success and Failure --
|g 4.3.5.
|t Optimality of the Majority Vote Combiner --
|g 4.4.
|t Weighted Majority Vote --
|g 4.4.1.
|t Two Examples --
|g 4.4.2.
|t Optimality of the Weighted Majority Vote Combiner --
|g 4.5.
|t Naive-Bayes Combiner --
|g 4.5.1.
|t Optimality of the Naive Bayes Combiner --
|g 4.5.2.
|t Implementation of the NB Combiner --
|g 4.6.
|t Multinomial Methods --
|g 4.7.
|t Comparison of Combination Methods for Label Outputs --
|t Appendix --
|g 4.A.1.
|t Matan's Proof for the Limits on the Majority Vote Accuracy --
|g 4.A.2.
|t Selected MATLAB Code --
|g 5.
|t Combining Continuous-Valued Outputs --
|g 5.1.
|t Decision Profile --
|g 5.2.
|t How Do We Get Probability Outputs? --
|g 5.2.1.
|t Probabilities Based on Discriminant Scores --
|g 5.2.2.
|t Probabilities Based on Counts: Laplace Estimator --
|g 5.3.
|t Nontrainable (Fixed) Combination Rules --
|g 5.3.1.
|t Generic Formulation --
|g 5.3.2.
|t Equivalence of Simple Combination Rules --
|g 5.3.3.
|t Generalized Mean Combiner --
|g 5.3.4.
|t Theoretical Comparison of Simple Combiners --
|g 5.3.5.
|t Where Do They Come From? --
|g 5.4.
|t Weighted Average (Linear Combiner) --
|g 5.4.1.
|t Consensus Theory --
|g 5.4.2.
|t Added Error for the Weighted Mean Combination --
|g 5.4.3.
|t Linear Regression --
|g 5.5.
|t Classifier as a Combiner --
|g 5.5.1.
|t Supra Bayesian Approach --
|g 5.5.2.
|t Decision Templates --
|g 5.5.3.
|t Linear Classifier --
|g 5.6.
|t Example of Nine Combiners for Continuous-Valued Outputs --
|g 5.7.
|t To Train or Not to Train? --
|t Appendix --
|g 5.A.1.
|t Theoretical Classification Error for the Simple Combiners --
|g 5.A.1.1.
|t Set-up and Assumptions --
|g 5.A.1.2.
|t Individual Error --
|g 5.A.1.3.
|t Minimum and Maximum --
|g 5.A.1.4.
|t Average (Sum) --
|g 5.A.1.5.
|t Median and Majority Vote --
|g 5.A.1.6.
|t Oracle --
|g 5.A.2.
|t Selected MATLAB Code --
|g 6.
|t Ensemble Methods --
|g 6.1.
|t Bagging --
|g 6.1.1.
|t Origins: Bagging Predictors --
|g 6.1.2.
|t Why Does Bagging Work? --
|g 6.1.3.
|t Out-of-bag Estimates --
|g 6.1.4.
|t Variants of Bagging --
|g 6.2.
|t Random Forests --
|g 6.3.
|t AdaBoost --
|g 6.3.1.
|t AdaBoost Algorithm --
|g 6.3.2.
|t arc-x4 Algorithm --
|g 6.3.3.
|t Why Does AdaBoost Work? --
|g 6.3.4.
|t Variants of Boosting --
|g 6.3.5.
|t Famous Application: AdaBoost for Face Detection --
|g 6.4.
|t Random Subspace Ensembles --
|g 6.5.
|t Rotation Forest --
|g 6.6.
|t Random Linear Oracle --
|g 6.7.
|t Error Correcting Output Codes (ECOC) --
|g 6.7.1.
|t Code Designs --
|g 6.7.2.
|t Decoding --
|g 6.7.3.
|t Ensembles of Nested Dichotomies --
|t Appendix --
|g 6.A.1.
|t Bagging --
|g 6.A.2.
|t AdaBoost --
|g 6.A.3.
|t Random Subspace --
|g 6.A.4.
|t Rotation Forest --
|g 6.A.5.
|t Random Linear Oracle --
|g 6.A.6.
|t ECOC --
|g 7.
|t Classifier Selection --
|g 7.1.
|t Preliminaries --
|g 7.2.
|t Why Classifier Selection Works --
|g 7.3.
|t Estimating Local Competence Dynamically --
|g 7.3.1.
|t Decision-Independent Estimates --
|g 7.3.2.
|t Decision-Dependent Estimates --
|g 7.4.
|t Pre-Estimation of the Competence Regions --
|g 7.4.1.
|t Bespoke Classifiers --
|g 7.4.2.
|t Clustering and Selection --
|g 7.5.
|t Simultaneous Training of Regions and Classifiers --
|g 7.6.
|t Cascade Classifiers --
|t Appendix: Selected MATLAB Code --
|g 7.A.1.
|t Banana Data --
|g 7.A.2.
|t Evolutionary Algorithm for a Selection Ensemble for the Banana Data --
|g 8.
|t Diversity in Classifier Ensembles --
|g 8.1.
|t What Is Diversity? --
|g 8.1.1.
|t Diversity for a Point-Value Estimate --
|g 8.1.2.
|t Diversity in Software Engineering --
|g 8.1.3.
|t Statistical Measures of Relationship --
|g 8.2.
|t Measuring Diversity in Classifier Ensembles --
|g 8.2.1.
|t Pairwise Measures --
|g 8.2.2.
|t Nonpairwise Measures --
|g 8.3.
|t Relationship Between Diversity and Accuracy --
|g 8.3.1.
|t Example --
|g 8.3.2.
|t Relationship Patterns --
|g 8.3.3.
|t Caveat: Independent Outputs [≠] Independent Errors --
|g 8.3.4.
|t Independence Is Not the Best Scenario --
|g 8.3.5.
|t Diversity and Ensemble Margins --
|g 8.4.
|t Using Diversity --
|g 8.4.1.
|t Diversity for Finding Bounds and Theoretical Relationships --
|g 8.4.2.
|t Kappa-error Diagrams and Ensemble Maps --
|g 8.4.3.
|t Overproduce and Select --
|g 8.5.
|t Conclusions: Diversity of Diversity --
|t Appendix --
|g 8.A.1.
|t Derivation of Diversity Measures for Oracle Outputs --
|g 8.A.1.1.
|t Correlation ρ --
|g 8.A.1.2.
|t Interrater Agreement κ --
|g 8.A.2.
|t Diversity Measure Equivalence --
|g 8.A.3.
|t Independent Outputs [≠] Independent Errors --
|g 8.A.4.
|t Bound on the Kappa-Error Diagram --
|g 8.A.5.
|t Calculation of the Pareto Frontier --
|g 9.
|t Ensemble Feature Selection --
|g 9.1.
|t Preliminaries --
|g 9.1.1.
|t Right and Wrong Protocols --
|g 9.1.2.
|t Ensemble Feature Selection Approaches --
|g 9.1.3.
|t Natural Grouping --
|g 9.2.
|t Ranking by Decision Tree Ensembles --
|g 9.2.1.
|t Simple Count and Split Criterion --
|g 9.2.2.
|t Permuted Features or the "Noised-up" Method --
|g 9.3.
|t Ensembles of Rankers --
|g 9.3.1.
|t Approach --
|g 9.3.2.
|t Ranking Methods (Criteria) --
|g 9.4.
|t Random Feature Selection for the Ensemble --
|g 9.4.1.
|t Random Subspace Revisited --
|g 9.4.2.
|t Usability, Coverage, and Feature Diversity --
|g 9.4.3.
|t Genetic Algorithms --
|g 9.5.
|t Nonrandom Selection --
|g 9.5.1.
|t "Favorite Class" Model --
|g 9.5.2.
|t Iterative Model
|
880 |
0 |
0 |
|g --
|g 9.5.3.
|t Incremental Model --
|g 9.6.
|t Stability Index --
|g 9.6.1.
|t Consistency Between a Pair of Subsets --
|g 9.6.2.
|t Stability Index for K Sequences --
|g 9.6.3.
|t Example of Applying the Stability Index --
|t Appendix --
|g 9.A.1.
|t MATLAB Code for the Numerical Example of Ensemble Ranking --
|g 9.A.2.
|t MATLAB GA Nuggets --
|g 9.A.3.
|t MATLAB Code for the Stability Index --
|g 10.
|t Final Thought.
|
938 |
|
|
|a Books 24x7
|b B247
|n bks00072712
|
938 |
|
|
|a EBL - Ebook Library
|b EBLB
|n EBL1762076
|
938 |
|
|
|a ebrary
|b EBRY
|n ebr10905958
|
938 |
|
|
|a EBSCOhost
|b EBSC
|n 827344
|
938 |
|
|
|a YBP Library Services
|b YANK
|n 12029936
|
938 |
|
|
|a YBP Library Services
|b YANK
|n 12673648
|
938 |
|
|
|a YBP Library Services
|b YANK
|n 11632335
|
994 |
|
|
|a 92
|b IZTAP
|