Advanced Deep Learning with Keras: Apply Deep Learning Techniques, Autoencoders, GANs, Variational Autoencoders, Deep Reinforcement Learning, Policy Gradients, and More.
This book covers advanced deep learning techniques for building successful AI applications. Using MLPs, CNNs, and RNNs as building blocks for more advanced techniques, you'll study deep neural network architectures, Autoencoders, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Deep...
| Classification: | eBook |
|---|---|
| Main author: | |
| Format: | Electronic eBook |
| Language: | English |
| Published: | Birmingham: Packt Publishing Ltd, 2018. |
| Subjects: | |
| Online access: | Full text |
Table of Contents:
- Cover; Copyright; Packt upsell; Contributors; Table of Contents; Preface; Chapter 1: Introducing Advanced Deep Learning with Keras; Why is Keras the perfect deep learning library?; Installing Keras and TensorFlow; Implementing the core deep learning models
- MLPs, CNNs and RNNs; The difference between MLPs, CNNs, and RNNs; Multilayer perceptrons (MLPs); MNIST dataset; MNIST digits classifier model; Building a model using MLPs and Keras; Regularization; Output activation and loss function; Optimization; Performance evaluation; Model summary; Convolutional neural networks (CNNs); Convolution.
- Pooling operations; Performance evaluation and model summary; Recurrent neural networks (RNNs); Conclusion; Chapter 2: Deep Neural Networks; Functional API; Creating a two-input and one-output model; Deep residual networks (ResNet); ResNet v2; Densely connected convolutional networks (DenseNet); Building a 100-layer DenseNet-BC for CIFAR10; Conclusion; References; Chapter 3: Autoencoders; Principles of autoencoders; Building autoencoders using Keras; Denoising autoencoder (DAE); Automatic colorization autoencoder; Conclusion; References; Chapter 4: Generative Adversarial Networks (GANs).
- An overview of GANs; Principles of GANs; GAN implementation in Keras; Conditional GAN; Conclusion; References; Chapter 5: Improved GANs; Wasserstein GAN; Distance functions; Distance function in GANs; Use of Wasserstein loss; WGAN implementation using Keras; Least-squares GAN (LSGAN); Auxiliary classifier GAN (ACGAN); Conclusion; References; Chapter 6: Disentangled Representation GANs; Disentangled representations; InfoGAN; Implementation of InfoGAN in Keras; Generator outputs of InfoGAN; StackedGAN; Implementation of StackedGAN in Keras; Generator outputs of StackedGAN; Conclusion; Reference.
- Chapter 7: Cross-Domain GANs; Principles of CycleGAN; The CycleGAN Model; Implementing CycleGAN using Keras; Generator outputs of CycleGAN; CycleGAN on MNIST and SVHN datasets; Conclusion; References; Chapter 8: Variational Autoencoders (VAEs); Principles of VAEs; Variational inference; Core equation; Optimization; Reparameterization trick; Decoder testing; VAEs in Keras; Using CNNs for VAEs; Conditional VAE (CVAE); β-VAE: VAE with disentangled latent representations; Conclusion; References; Chapter 9: Deep Reinforcement Learning; Principles of reinforcement learning (RL); The Q value.
- Q-Learning example; Q-Learning in Python; Nondeterministic environment; Temporal-difference learning; Q-Learning on OpenAI gym; Deep Q-Network (DQN); DQN on Keras; Double Q-Learning (DDQN); Conclusion; References; Chapter 10: Policy Gradient Methods; Policy gradient theorem; Monte Carlo policy gradient (REINFORCE) method; REINFORCE with baseline method; Actor-Critic method; Advantage Actor-Critic (A2C) method; Policy Gradient methods with Keras; Performance evaluation of policy gradient methods; Conclusion; References; Other Books You May Enjoy; Index.
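The sketches below illustrate a few of the hands-on topics listed in the contents above. They are minimal examples written against the tf.keras API, not listings from the book. First, a small MLP digit classifier for MNIST, matching the Chapter 1 topics (MNIST dataset, building a model using MLPs and Keras, regularization, output activation and loss function, optimization, performance evaluation); the layer sizes and dropout rate are illustrative choices, not the book's.

```python
# Illustrative sketch (not the book's listing): an MNIST digit classifier
# built from a small MLP with tf.keras.
import tensorflow as tf
from tensorflow.keras import layers, models

# Load and flatten the MNIST images, scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Two hidden layers with dropout regularization; softmax output over 10 digits.
model = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.45),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.45),
    layers.Dense(10, activation="softmax"),
])

# Sparse categorical cross-entropy avoids one-hot encoding the labels.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print(f"test accuracy: {acc:.4f}")
```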
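Chapter 3 lists a denoising autoencoder (DAE) built with Keras. A minimal sketch of the idea, again assuming tf.keras and an illustrative fully connected architecture rather than the book's own: corrupt MNIST images with Gaussian noise and train the network to reconstruct the clean originals.

```python
# Illustrative sketch (not the book's listing): a denoising autoencoder on MNIST.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Corrupt the inputs with Gaussian noise; the target stays the clean image.
noise = 0.5
x_train_noisy = np.clip(x_train + noise * np.random.normal(size=x_train.shape),
                        0.0, 1.0).astype("float32")
x_test_noisy = np.clip(x_test + noise * np.random.normal(size=x_test.shape),
                       0.0, 1.0).astype("float32")

# Encoder compresses to a 32-dimensional latent code; decoder reconstructs pixels.
autoencoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(32, activation="relu"),      # latent code
    layers.Dense(256, activation="relu"),
    layers.Dense(784, activation="sigmoid"),  # reconstruction in [0, 1]
])

autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train_noisy, x_train,       # noisy input -> clean target
                epochs=5, batch_size=128,
                validation_data=(x_test_noisy, x_test))
```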
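Chapter 8 covers the reparameterization trick for VAEs. A standalone sketch of that single step, assuming tf.keras: writing z = mu + exp(0.5 * log_var) * eps keeps sampling differentiable with respect to the encoder outputs, so the VAE can be trained end to end. The `Sampling` layer name is an illustrative choice, not the book's.

```python
# Illustrative sketch (not the book's listing): the VAE reparameterization trick
# as a standalone tf.keras layer.
import tensorflow as tf
from tensorflow.keras import layers

class Sampling(layers.Layer):
    """Draw z ~ N(mu, sigma^2) via z = mu + exp(0.5 * log_var) * eps."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Tiny usage example: sample a 2-D latent code for a batch of 4.
z_mean = tf.zeros((4, 2))
z_log_var = tf.zeros((4, 2))
z = Sampling()((z_mean, z_log_var))
print(z.shape)  # (4, 2)
```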
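Chapter 9 lists a Q-Learning example in Python. The sketch below runs tabular Q-Learning on a hypothetical deterministic chain environment (not an environment from the book) to show the Q-value update in isolation.

```python
# Illustrative sketch (not the book's listing): tabular Q-Learning on a toy
# deterministic chain with numpy.
import numpy as np

n_states, n_actions = 6, 2      # toy chain: action 0 = left, action 1 = right
goal = n_states - 1             # reaching the last state yields the only reward
gamma, alpha, epsilon = 0.9, 0.5, 0.1
q_table = np.zeros((n_states, n_actions))

def step(state, action):
    """Deterministic transition: move left or right along the chain."""
    next_state = max(0, state - 1) if action == 0 else min(goal, state + 1)
    reward = 1.0 if next_state == goal else 0.0
    return next_state, reward, next_state == goal

rng = np.random.default_rng(42)
for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy exploration over the current Q estimates.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, done = step(state, action)
        # Q-Learning update: bootstrap from the greedy value of the next state.
        td_target = reward + gamma * np.max(q_table[next_state]) * (not done)
        q_table[state, action] += alpha * (td_target - q_table[state, action])
        state = next_state

print(np.round(q_table, 3))  # the greedy policy should always move right
```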