
Deep Learning with Python

Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects.

Bibliographic Details
Classification: Electronic book
Main Author: Chollet, François (Author)
Format: Electronic eBook
Language: English
Publicado: Shelter Island, NY : Manning Publications, [2018]
Online Access: Full text (requires prior registration with an institutional email)
Table of Contents:
  • Intro
  • Deep Learning with Python
  • François Chollet
  • Copyright
  • Brief Table of Contents
  • Table of Contents
  • Preface
  • Acknowledgments
  • About this Book
  • Who should read this book
  • Roadmap
  • Software/hardware requirements
  • Source code
  • Book forum
  • About the Author
  • About the Cover
  • Part 1. Fundamentals of deep learning
  • Chapter 1. What is deep learning?
  • 1.1. Artificial intelligence, machine learning, and deep learning
  • 1.1.1. Artificial intelligence
  • 1.1.2. Machine learning
  • 1.1.3. Learning representations from data
  • 1.1.4. The "deep" in deep learning
  • 1.1.5. Understanding how deep learning works, in three figures
  • 1.1.6. What deep learning has achieved so far
  • 1.1.7. Don't believe the short-term hype
  • 1.1.8. The promise of AI
  • 1.2. Before deep learning: a brief history of machine learning
  • 1.2.1. Probabilistic modeling
  • 1.2.2. Early neural networks
  • 1.2.3. Kernel methods
  • 1.2.4. Decision trees, random forests, and gradient boosting machines
  • 1.2.5. Back to neural networks
  • 1.2.6. What makes deep learning different
  • 1.2.7. The modern machine-learning landscape
  • 1.3. Why deep learning? Why now?
  • 1.3.1. Hardware
  • 1.3.2. Data
  • 1.3.3. Algorithms
  • 1.3.4. A new wave of investment
  • 1.3.5. The democratization of deep learning
  • 1.3.6. Will it last?
  • Chapter 2. Before we begin: the mathematical building blocks of neural networks
  • 2.1. A first look at a neural network
  • 2.2. Data representations for neural networks
  • 2.2.1. Scalars (0D tensors)
  • 2.2.2. Vectors (1D tensors)
  • 2.2.3. Matrices (2D tensors)
  • 2.2.4. 3D tensors and higher-dimensional tensors
  • 2.2.5. Key attributes
  • 2.2.6. Manipulating tensors in Numpy
  • 2.2.7. The notion of data batches
  • 2.2.8. Real-world examples of data tensors
  • 2.2.9. Vector data
  • 2.2.10. Timeseries data or sequence data
  • 2.2.11. Image data
  • 2.2.12. Video data
  • 2.3. The gears of neural networks: tensor operations
  • 2.3.1. Element-wise operations
  • 2.3.2. Broadcasting
  • 2.3.3. Tensor dot
  • 2.3.4. Tensor reshaping
  • 2.3.5. Geometric interpretation of tensor operations
  • 2.3.6. A geometric interpretation of deep learning
  • 2.4. The engine of neural networks: gradient-based optimization
  • 2.4.1. What's a derivative?
  • 2.4.2. Derivative of a tensor operation: the gradient
  • 2.4.3. Stochastic gradient descent
  • 2.4.4. Chaining derivatives: the Backpropagation algorithm
  • 2.5. Looking back at our first example
  • Chapter 3. Getting started with neural networks
  • 3.1. Anatomy of a neural network
  • 3.1.1. Layers: the building blocks of deep learning
  • 3.1.2. Models: networks of layers
  • 3.1.3. Loss functions and optimizers: keys to configuring the learning process
  • 3.2. Introduction to Keras
  • 3.2.1. Keras, TensorFlow, Theano, and CNTK
  • 3.2.2. Developing with Keras: a quick overview
  • 3.3. Setting up a deep-learning workstation
  • 3.3.1. Jupyter notebooks: the preferred way to run deep-learning experiments
  • 3.3.2. Getting Keras running: two options
  • 3.3.3. Running deep-learning jobs in the cloud: pros and cons
  • 3.3.4. What is the best GPU for deep learning?
  • 3.4. Classifying movie reviews: a binary classification example
  • 3.4.1. The IMDB dataset
  • 3.4.2. Preparing the data
  • 3.4.3. Building your network
  • 3.4.4. Validating your approach
  • 3.4.5. Using a trained network to generate predictions on new data
  • 3.4.6. Further experiments
  • 3.4.7. Wrapping up
  • 3.5. Classifying newswires: a multiclass classification example
  • 3.5.1. The Reuters dataset
  • 3.5.2. Preparing the data
  • 3.5.3. Building your network
  • 3.5.4. Validating your approach
  • 3.5.5. Generating predictions on new data
  • 3.5.6. A different way to handle the labels and the loss
  • 3.5.7. The importance of having sufficiently large intermediate layers
  • 3.5.8. Further experiments
  • 3.5.9. Wrapping up
  • 3.6. Predicting house prices: a regression example
  • 3.6.1. The Boston Housing Price dataset
  • 3.6.2. Preparing the data
  • 3.6.3. Building your network
  • 3.6.4. Validating your approach using K-fold validation
  • 3.6.5. Wrapping up
  • Chapter 4. Fundamentals of machine learning
  • 4.1. Four branches of machine learning
  • 4.1.1. Supervised learning
  • 4.1.2. Unsupervised learning
  • 4.1.3. Self-supervised learning
  • 4.1.4. Reinforcement learning
  • 4.2. Evaluating machine-learning models
  • 4.2.1. Training, validation, and test sets
  • 4.2.2. Things to keep in mind
  • 4.3. Data preprocessing, feature engineering, and feature learning
  • 4.3.1. Data preprocessing for neural networks
  • 4.3.2. Feature engineering
  • 4.4. Overfitting and underfitting
  • 4.4.1. Reducing the network's size
  • 4.4.2. Adding weight regularization
  • 4.4.3. Adding dropout
  • 4.5. The universal workflow of machine learning
  • 4.5.1. Defining the problem and assembling a dataset
  • 4.5.2. Choosing a measure of success
  • 4.5.3. Deciding on an evaluation protocol
  • 4.5.4. Preparing your data
  • 4.5.5. Developing a model that does better than a baseline
  • 4.5.6. Scaling up: developing a model that overfits
  • 4.5.7. Regularizing your model and tuning your hyperparameters
  • Part 2. Deep learning in practice
  • Chapter 5. Deep learning for computer vision
  • 5.1. Introduction to convnets
  • 5.1.1. The convolution operation
  • 5.1.2. The max-pooling operation
  • 5.2. Training a convnet from scratch on a small dataset
  • 5.2.1. The relevance of deep learning for small-data problems
  • 5.2.2. Downloading the data
  • 5.2.3. Building your network
  • 5.2.4. Data preprocessing
  • 5.2.5. Using data augmentation
  • 5.3. Using a pretrained convnet
  • 5.3.1. Feature extraction
  • 5.3.2. Fine-tuning
  • 5.3.3. Wrapping up
  • 5.4. Visualizing what convnets learn
  • 5.4.1. Visualizing intermediate activations
  • 5.4.2. Visualizing convnet filters
  • 5.4.3. Visualizing heatmaps of class activation
  • Chapter 6. Deep learning for text and sequences
  • 6.1. Working with text data
  • 6.1.1. One-hot encoding of words and characters
  • 6.1.2. Using word embeddings
  • 6.1.3. Putting it all together: from raw text to word embeddings
  • 6.1.4. Wrapping up
  • 6.2. Understanding recurrent neural networks
  • 6.2.1. A recurrent layer in Keras
  • 6.2.2. Understanding the LSTM and GRU layers
  • 6.2.3. A concrete LSTM example in Keras
  • 6.2.4. Wrapping up
  • 6.3. Advanced use of recurrent neural networks
  • 6.3.1. A temperature-forecasting problem
  • 6.3.2. Preparing the data
  • 6.3.3. A common-sense, non-machine-learning baseline
  • 6.3.4. A basic machine-learning approach
  • 6.3.5. A first recurrent baseline
  • 6.3.6. Using recurrent dropout to fight overfitting
  • 6.3.7. Stacking recurrent layers
  • 6.3.8. Using bidirectional RNNs
  • 6.3.9. Going even further
  • 6.3.10. Wrapping up
  • 6.4. Sequence processing with convnets
  • 6.4.1. Understanding 1D convolution for sequence data
  • 6.4.2. 1D pooling for sequence data
  • 6.4.3. Implementing a 1D convnet
  • 6.4.4. Combining CNNs and RNNs to process long sequences
  • 6.4.5. Wrapping up
  • Chapter 7. Advanced deep-learning best practices
  • 7.1. Going beyond the Sequential model: the Keras functional API
  • 7.1.1. Introduction to the functional API
  • 7.1.2. Multi-input models
  • 7.1.3. Multi-output models
  • 7.1.4. Directed acyclic graphs of layers
  • 7.1.5. Layer weight sharing
  • 7.1.6. Models as layers
  • 7.1.7. Wrapping up
  • 7.2. Inspecting and monitoring deep-learning models using Keras callbacks and TensorBoard
  • 7.2.1. Using callbacks to act on a model during training
  • 7.2.2. Introduction to TensorBoard: the TensorFlow visualization framework
  • 7.2.3. Wrapping up
  • 7.3. Getting the most out of your models
  • 7.3.1. Advanced architecture patterns
  • 7.3.2. Hyperparameter optimization
  • 7.3.3. Model ensembling
  • 7.3.4. Wrapping up
  • Chapter 8. Generative deep learning
  • 8.1. Text generation with LSTM
  • 8.1.1. A brief history of generative recurrent networks
  • 8.1.2. How do you generate sequence data?
  • 8.1.3. The importance of the sampling strategy
  • 8.1.4. Implementing character-level LSTM text generation
  • 8.1.5. Wrapping up
  • 8.2. DeepDream
  • 8.2.1. Implementing DeepDream in Keras
  • 8.2.2. Wrapping up
  • 8.3. Neural style transfer
  • 8.3.1. The content loss
  • 8.3.2. The style loss
  • 8.3.3. Neural style transfer in Keras
  • 8.3.4. Wrapping up
  • 8.4. Generating images with variational autoencoders
  • 8.4.1. Sampling from latent spaces of images
  • 8.4.2. Concept vectors for image editing
  • 8.4.3. Variational autoencoders
  • 8.4.4. Wrapping up
  • 8.5. Introduction to generative adversarial networks
  • 8.5.1. A schematic GAN implementation
  • 8.5.2. A bag of tricks
  • 8.5.3. The generator
  • 8.5.4. The discriminator
  • 8.5.5. The adversarial network
  • 8.5.6. How to train your DCGAN
  • 8.5.7. Wrapping up
  • Chapter 9. Conclusions
  • 9.1. Key concepts in review
  • 9.1.1. Various approaches to AI
  • 9.1.2. What makes deep learning special within the field of machine learning
  • 9.1.3. How to think about deep learning
  • 9.1.4. Key enabling technologies
  • 9.1.5. The universal machine-learning workflow
  • 9.1.6. Key network architectures
  • 9.1.7. The space of possibilities
  • 9.2. The limitations of deep learning