
Introduction to pattern recognition and machine learning

"This book adopts a detailed and methodological algorithmic approach to explain the concepts of pattern recognition. While the text provides a systematic account of its major topics such as pattern representation and nearest neighbour based classifiers, current topics -- neural networks, suppor...


Bibliographic Details
Classification: Electronic Book
Main Authors: Murty, M. Narasimha (Author), Devi, V. Susheela (Author)
Format: Electronic eBook
Language: English
Published: New Jersey : World Scientific, [2015]
Series: IISc lecture notes series ; 5.
Online Access: Full text
Table of Contents:
  • Table of Contents; About the Authors; Preface; 1. Introduction; 1. Classifiers: An Introduction; 2. An Introduction to Clustering; 3. Machine Learning; Research Ideas; 2. Types of Data; 1. Features and Patterns; 2. Domain of a Variable; 3. Types of Features; 3.1. Nominal data; 3.1.1. Operations on nominal variables; 3.2. Ordinal data; 3.2.1. Operations possible on ordinal variables; 3.3. Interval-valued variables; 3.3.1. Operations possible on interval-valued variables; 3.4. Ratio variables; 3.5. Spatio-temporal data; 4. Proximity measures; 4.1. Fractional norms; 4.2. Are metrics essential?
  • 4.3. Similarity between vectors; 4.4. Proximity between spatial patterns; 4.5. Proximity between temporal patterns; 4.6. Mean dissimilarity; 4.7. Peak dissimilarity; 4.8. Correlation coefficient; 4.9. Dynamic Time Warping (DTW) distance; 4.9.1. Lower bounding the DTW distance; Research Ideas; 3. Feature Extraction and Feature Selection; 1. Types of Feature Selection; 2. Mutual Information (MI) for Feature Selection; 3. Chi-square Statistic; 4. Goodman-Kruskal Measure; 5. Laplacian Score; 6. Singular Value Decomposition (SVD); 7. Non-negative Matrix Factorization (NMF).
  • 8. Random Projections (RPs) for Feature Extraction; 8.1. Advantages of random projections; 9. Locality Sensitive Hashing (LSH); 10. Class Separability; 11. Genetic and Evolutionary Algorithms; 11.1. Hybrid GA for feature selection; 12. Ranking for Feature Selection; 12.1. Feature selection based on an optimization formulation; 12.2. Feature ranking using F-score; 12.3. Feature ranking using linear support vector machine (SVM) weight vector; 12.4. Ensemble feature ranking; 12.4.1. Using threshold-based feature selection techniques; 12.4.2. Evolutionary algorithm.
  • 12.5. Feature ranking using number of label changes; 13. Feature Selection for Time Series Data; 13.1. Piecewise aggregate approximation; 13.2. Spectral decomposition; 13.3. Wavelet decomposition; 13.4. Singular Value Decomposition (SVD); 13.5. Common principal component loading based variable subset selection (CLeVer); Research Ideas; 4. Bayesian Learning; 1. Document Classification; 2. Naive Bayes Classifier; 3. Frequency-Based Estimation of Probabilities; 4. Posterior Probability; 5. Density Estimation; 6. Conjugate Priors; Research Ideas; 5. Classification.
  • 1. Classification Without Learning; 2. Classification in High-Dimensional Spaces; 2.1. Fractional distance metrics; 2.2. Shrinkage-divergence proximity (SDP); 3. Random Forests; 3.1. Fuzzy random forests; 4. Linear Support Vector Machine (SVM); 4.1. SVM-kNN; 4.2. Adaptation of cutting plane algorithm; 4.3. Nyström approximated SVM; 5. Logistic Regression; 6. Semi-supervised Classification; 6.1. Using clustering algorithms; 6.2. Using generative models; 6.3. Using low density separation; 6.4. Using graph-based methods; 6.5. Using co-training methods; 6.6. Using self-training methods.