Machine Learning Quick Reference : Quick and Essential Machine Learning Hacks for Training Smart Data Models.

Machine learning involves the development and training of models used to predict future outcomes. This book is a practical guide to tips and tricks related to machine learning. It includes hands-on, easy-to-access techniques on topics like model selection, performance tuning, training neural net...

Bibliographic Details
Classification: Electronic Book
Main Author: Kumar, Rahul
Format: Electronic eBook
Language: English
Published: Birmingham : Packt Publishing Ltd, 2019.
Subjects:
Online Access: Full text
Table of Contents:
  • Cover; Title Page; Copyright and Credits; About Packt; Contributors; Table of Contents; Preface
  • Chapter 1: Quantifying Learning Algorithms; Statistical models; Learning curve; Machine learning; Wright's model; Curve fitting; Residual; Statistical modeling: the two cultures of Leo Breiman; Training data, development data, and test data; Size of the training, development, and test set; Bias-variance trade-off; Regularization; Ridge regression (L2); Least absolute shrinkage and selection operator; Cross-validation and model selection; K-fold cross-validation; Model selection using cross-validation; 0.632 rule in bootstrapping; Model evaluation; Confusion matrix; Receiver operating characteristic curve; Area under ROC; H-measure; Dimensionality reduction; Summary
  • Chapter 2: Evaluating Kernel Learning; Introduction to vectors; Magnitude of the vector; Dot product; Linear separability; Hyperplanes; SVM; Support vector; Kernel trick; Kernel; Back to the kernel trick; Kernel types; Linear kernel; Polynomial kernel; Gaussian kernel; SVM example and parameter optimization through grid search; Summary
  • Chapter 3: Performance in Ensemble Learning; What is ensemble learning?; Ensemble methods; Bootstrapping; Bagging; Decision tree; Tree splitting; Parameters of tree splitting; Random forest algorithm; Case study; Boosting; Gradient boosting; Parameters of gradient boosting; Summary
  • Chapter 4: Training Neural Networks; Neural networks; How a neural network works; Model initialization; Loss function; Optimization; Computation in neural networks; Calculation of activation for H1; Backward propagation; Activation function; Types of activation functions; Network initialization; Backpropagation; Overfitting; Prevention of overfitting in NNs; Vanishing gradient; Overcoming vanishing gradient; Recurrent neural networks; Limitations of RNNs; Use case; Summary
  • Chapter 5: Time Series Analysis; Introduction to time series analysis; White noise; Detection of white noise in a series; Random walk; Autoregression; Autocorrelation; Stationarity; Detection of stationarity; AR model; Moving average model; Autoregressive integrated moving average; Optimization of parameters; AR model; ARIMA model; Anomaly detection; Summary
  • Chapter 6: Natural Language Processing; Text corpus; Sentences; Words; Bags of words; TF-IDF; Executing the count vectorizer; Executing TF-IDF in Python; Sentiment analysis; Sentiment classification; TF-IDF feature extraction; Count vectorizer bag of words feature extraction; Model building count vectorization; Topic modeling; LDA architecture; Evaluating the model; Visualizing the LDA; The Naive Bayes technique in text classification; The Bayes theorem; How the Naive Bayes classifier works; Summary
  • Chapter 7: Temporal and Sequential Pattern Discovery; Association rules; Apriori algorithm; Finding association rules; Frequent pattern growth; Frequent pattern tree growth; Validation