Agile artificial intelligence in Pharo : implementing neural networks, genetic algorithms, and neuroevolution / Alexandre Bergel.

Cover classical algorithms commonly used as artificial intelligence techniques and program agile artificial intelligence applications using Pharo. This book takes a practical approach by presenting the implementation details to illustrate the numerous concepts it explains. Along the way, you'll learn...

Bibliographic Details
Classification: Electronic book
Main Author: Bergel, Alexandre (Author)
Format: Electronic eBook
Language: English
Published: [United States] : Apress, [2020]
Subjects:
Online Access: Full text (requires prior registration with an institutional email address)
Table of Contents:
  • Intro
  • Table of Contents
  • About the Author
  • About the Technical Reviewer
  • Acknowledgments
  • Introduction
  • Part I: Neural Networks
  • Chapter 1: The Perceptron Model
  • 1.1 Perceptron as a Kind of Neuron
  • 1.2 Implementing the Perceptron
  • 1.3 Testing the Code
  • 1.4 Formulating Logical Expressions
  • 1.5 Handling Errors
  • 1.6 Combining Perceptrons
  • 1.7 Training a Perceptron
  • 1.8 Drawing Graphs
  • 1.9 Predicting and 2D Points
  • 1.10 Measuring the Precision
  • 1.11 Historical Perspective
  • 1.12 Exercises
  • 1.13 What Have We Seen in This Chapter?
  • 1.14 Further Reading About Pharo
  • Chapter 2: The Artificial Neuron
  • 2.1 Limit of the Perceptron
  • 2.2 Activation Function
  • 2.3 The Sigmoid Neuron
  • 2.4 Implementing the Activation Functions
  • 2.5 Extending the Neuron with the Activation Functions
  • 2.6 Adapting the Existing Tests
  • 2.7 Testing the Sigmoid Neuron
  • 2.8 Slower to Learn
  • 2.9 What Have We Seen in This Chapter?
  • Chapter 3: Neural Networks
  • 3.1 General Architecture
  • 3.2 Neural Layer
  • 3.3 Modeling a Neural Network
  • 3.4 Backpropagation
  • 3.4.1 Step 1: Forward Feeding
  • 3.4.2 Step 2: Error Backward Propagation
  • 3.4.3 Step 3: Updating Neuron Parameters
  • 3.5 What Have We Seen in This Chapter?
  • Chapter 4: Theory on Learning
  • 4.1 Loss Function
  • 4.2 Gradient Descent
  • 4.3 Parameter Update
  • 4.4 Gradient Descent in Our Implementation
  • 4.5 Stochastic Gradient Descent
  • 4.6 The Derivative of the Sigmoid Function
  • 4.7 What Have We Seen in This Chapter?
  • 4.8 Further Reading
  • Chapter 5: Data Classification
  • 5.1 Training a Network
  • 5.2 Neural Network as a Hashmap
  • 5.3 Visualizing the Error and the Topology
  • 5.4 Contradictory Data
  • 5.5 Classifying Data and One-Hot Encoding
  • 5.6 The Iris Dataset
  • 5.7 Training a Network with the Iris Dataset
  • 5.8 The Effect of the Learning Curve
  • 5.9 Testing and Validation
  • 5.10 Normalization
  • 5.11 Integrating Normalization into the NNetwork Class
  • 5.12 What Have We Seen in This Chapter?
  • Chapter 6: A Matrix Library
  • 6.1 Matrix Operations in C
  • 6.2 The Matrix Class
  • 6.3 Creating the Unit Test
  • 6.4 Accessing and Modifying the Content of a Matrix
  • 6.5 Summing Matrices
  • 6.6 Printing a Matrix
  • 6.7 Expressing Vectors
  • 6.8 Factors
  • 6.9 Dividing a Matrix by a Factor
  • 6.10 Matrix Product
  • 6.11 Matrix Subtraction
  • 6.12 Filling the Matrix with Random Numbers
  • 6.13 Summing the Matrix Values
  • 6.14 Transposing a Matrix
  • 6.15 Example
  • 6.16 What Have We Seen in This Chapter?
  • Chapter 7: Matrix-Based Neural Networks
  • 7.1 Defining a Matrix-Based Layer
  • 7.2 Defining a Matrix-Based Neural Network
  • 7.3 Visualizing the Results
  • 7.4 Iris Flower Dataset
  • 7.5 What Have We Seen in This Chapter?
  • Part II: Genetic Algorithms
  • Chapter 8: Genetic Algorithms
  • 8.1 Algorithms Inspired from Natural Evolution
  • 8.2 Example of a Genetic Algorithm
  • 8.3 Relevant Vocabulary
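
As a taste of the practical approach the description mentions, here is a minimal sketch, in Pharo, of the kind of perceptron the Chapter 1 entries above walk through. It is an illustration only, not the book's actual code: the class name Neuron, the weights: and bias: accessors, and the feed: method are assumptions made for this example.

    "Class definition: a perceptron holds a collection of weights and a bias."
    Object subclass: #Neuron
        instanceVariableNames: 'weights bias'
        classVariableNames: ''
        package: 'PerceptronSketch'

    Neuron >> weights: aCollection
        weights := aCollection

    Neuron >> bias: aNumber
        bias := aNumber

    Neuron >> feed: inputs
        "Compute the weighted sum of the inputs plus the bias,
         then apply a step activation: 1 if positive, 0 otherwise."
        | z |
        z := (inputs with: weights collect: [ :x :w | x * w ]) sum + bias.
        ^ z > 0 ifTrue: [ 1 ] ifFalse: [ 0 ]

With suitable weights and bias, such a perceptron models a logical AND gate, in the spirit of the "Formulating Logical Expressions" section:

    | p |
    p := Neuron new.
    p weights: #(1 1).
    p bias: -1.5.
    p feed: #(1 1). "answers 1: both inputs are on"
    p feed: #(0 1). "answers 0"

Learning the weights from examples, rather than setting them by hand, is the subject of the "Training a Perceptron" section.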