LEADER 00000cam a2200000Ii 4500
001 OR_on1019902016
003 OCoLC
005 20231017213018.0
006 m o d
007 cr unu||||||||
008 180119s2018 nyua o 001 0 eng c
010 __ |a 2018286242
040 __ |a UMI |b eng |e rda |e pn |c UMI |d STF |d OCLCF |d ALAUL |d TOH |d CEF |d KSU |d VT2 |d DEBBG |d CNCEN |d VOD |d AU@ |d WYU |d G3B |d S9I |d C6I |d UAB |d OCLCQ |d YDX |d EBLCP |d N$T |d WAU |d GPM |d UKAHL |d OCLCO |d OCLCQ |d OCLCO
019 __ |a 1047633211 |a 1048161190 |a 1066449097 |a 1066509517 |a 1103272065 |a 1129377947 |a 1153000176 |a 1257075949
020 __ |z 978161729443 |q (electronic)
020 __ |a 9781617294433
020 __ |a 1617294438
020 __ |a 9781638352044 |q electronic book
020 __ |a 1638352046 |q electronic book
029 1_ |a GBVCP |b 1014937663
035 __ |a (OCoLC)1019902016 |z (OCoLC)1047633211 |z (OCoLC)1048161190 |z (OCoLC)1066449097 |z (OCoLC)1066509517 |z (OCoLC)1103272065 |z (OCoLC)1129377947 |z (OCoLC)1153000176 |z (OCoLC)1257075949
037 __ |a CL0500000931 |b Safari Books Online
050 _4 |a QA76.73.P98 |b C465 2018eb
082 04 |a 005.133 |2 23
049 __ |a UAMI
100 1_ |a Chollet, François, |e author.
245 10 |a Deep learning with Python / |c François Chollet.
264 _1 |a Shelter Island, NY : |b Manning Publications, |c [2018]
264 _4 |c ©2018
300 __ |a 1 online resource (361 pages) : |b illustrations
336 __ |a text |b txt |2 rdacontent
337 __ |a computer |b c |2 rdamedia
338 __ |a online resource |b cr |2 rdacarrier
347 __ |a data file |2 rda
500 __ |a Includes index.
588 __ |a Description based on online resource; title from title page (viewed January 17, 2018).
520 __ |a Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects.
504 __ |a Includes bibliographical references and index.
505 0_
|a Intro -- Deep Learning with Python -- François Chollet -- Copyright -- Brief Table of Contents -- Table of Contents -- Preface -- Acknowledgments -- About this Book -- Who should read this book -- Roadmap -- Software/hardware requirements -- Source code -- Book forum -- About the Author -- About the Cover -- Part 1. Fundamentals of deep learning -- Chapter 1. What is deep learning? -- 1.1. Artificial intelligence, machine learning, and deep learning -- 1.1.1. Artificial intelligence -- 1.1.2. Machine learning -- 1.1.3. Learning representations from data -- 1.1.4. The "deep" in deep learning -- 1.1.5. Understanding how deep learning works, in three figures -- 1.1.6. What deep learning has achieved so far -- 1.1.7. Don't believe the short-term hype -- 1.1.8. The promise of AI -- 1.2. Before deep learning: a brief history of machine learning -- 1.2.1. Probabilistic modeling -- 1.2.2. Early neural networks -- 1.2.3. Kernel methods -- 1.2.4. Decision trees, random forests, and gradient boosting machines -- 1.2.5. Back to neural networks -- 1.2.6. What makes deep learning different -- 1.2.7. The modern machine-learning landscape -- 1.3. Why deep learning? Why now? -- 1.3.1. Hardware -- 1.3.2. Data -- 1.3.3. Algorithms -- 1.3.4. A new wave of investment -- 1.3.5. The democratization of deep learning -- 1.3.6. Will it last? -- Chapter 2. Before we begin: the mathematical building blocks of neural networks -- 2.1. A first look at a neural network -- 2.2. Data representations for neural networks -- 2.2.1. Scalars (0D tensors) -- 2.2.2. Vectors (1D tensors) -- 2.2.3. Matrices (2D tensors) -- 2.2.4. 3D tensors and higher-dimensional tensors -- 2.2.5. Key attributes -- 2.2.6. Manipulating tensors in Numpy -- 2.2.7. The notion of data batches -- 2.2.8. Real-world examples of data tensors -- 2.2.9. Vector data -- 2.2.10. Timeseries data or sequence data.
505 8_
|a 2.2.11. Image data -- 2.2.12. Video data -- 2.3. The gears of neural networks: tensor operations -- 2.3.1. Element-wise operations -- 2.3.2. Broadcasting -- 2.3.3. Tensor dot -- 2.3.4. Tensor reshaping -- 2.3.5. Geometric interpretation of tensor operations -- 2.3.6. A geometric interpretation of deep learning -- 2.4. The engine of neural networks: gradient-based optimization -- 2.4.1. What's a derivative? -- 2.4.2. Derivative of a tensor operation: the gradient -- 2.4.3. Stochastic gradient descent -- 2.4.4. Chaining derivatives: the Backpropagation algorithm -- 2.5. Looking back at our first example -- Chapter 3. Getting started with neural networks -- 3.1. Anatomy of a neural network -- 3.1.1. Layers: the building blocks of deep learning -- 3.1.2. Models: networks of layers -- 3.1.3. Loss functions and optimizers: keys to configuring the learning process -- 3.2. Introduction to Keras -- 3.2.1. Keras, TensorFlow, Theano, and CNTK -- 3.2.2. Developing with Keras: a quick overview -- 3.3. Setting up a deep-learning workstation -- 3.3.1. Jupyter notebooks: the preferred way to run deep-learning experiments -- 3.3.2. Getting Keras running: two options -- 3.3.3. Running deep-learning jobs in the cloud: pros and cons -- 3.3.4. What is the best GPU for deep learning? -- 3.4. Classifying movie reviews: a binary classification example -- 3.4.1. The IMDB dataset -- 3.4.2. Preparing the data -- 3.4.3. Building your network -- 3.4.4. Validating your approach -- 3.4.5. Using a trained network to generate predictions on new data -- 3.4.6. Further experiments -- 3.4.7. Wrapping up -- 3.5. Classifying newswires: a multiclass classification example -- 3.5.1. The Reuters dataset -- 3.5.2. Preparing the data -- 3.5.3. Building your network -- 3.5.4. Validating your approach -- 3.5.5. Generating predictions on new data.
505 8_
|a 3.5.6. A different way to handle the labels and the loss -- 3.5.7. The importance of having sufficiently large intermediate layers -- 3.5.8. Further experiments -- 3.5.9. Wrapping up -- 3.6. Predicting house prices: a regression example -- 3.6.1. The Boston Housing Price dataset -- 3.6.2. Preparing the data -- 3.6.3. Building your network -- 3.6.4. Validating your approach using K-fold validation -- 3.6.5. Wrapping up -- Chapter 4. Fundamentals of machine learning -- 4.1. Four branches of machine learning -- 4.1.1. Supervised learning -- 4.1.2. Unsupervised learning -- 4.1.3. Self-supervised learning -- 4.1.4. Reinforcement learning -- 4.2. Evaluating machine-learning models -- 4.2.1. Training, validation, and test sets -- 4.2.2. Things to keep in mind -- 4.3. Data preprocessing, feature engineering, and feature learning -- 4.3.1. Data preprocessing for neural networks -- 4.3.2. Feature engineering -- 4.4. Overfitting and underfitting -- 4.4.1. Reducing the network's size -- 4.4.2. Adding weight regularization -- 4.4.3. Adding dropout -- 4.5. The universal workflow of machine learning -- 4.5.1. Defining the problem and assembling a dataset -- 4.5.2. Choosing a measure of success -- 4.5.3. Deciding on an evaluation protocol -- 4.5.4. Preparing your data -- 4.5.5. Developing a model that does better than a baseline -- 4.5.6. Scaling up: developing a model that overfits -- 4.5.7. Regularizing your model and tuning your hyperparameters -- Part 2. Deep learning in practice -- Chapter 5. Deep learning for computer vision -- 5.1. Introduction to convnets -- 5.1.1. The convolution operation -- 5.1.2. The max-pooling operation -- 5.2. Training a convnet from scratch on a small dataset -- 5.2.1. The relevance of deep learning for small-data problems -- 5.2.2. Downloading the data -- 5.2.3. Building your network -- 5.2.4. Data preprocessing.
505 8_
|a 5.2.5. Using data augmentation -- 5.3. Using a pretrained convnet -- 5.3.1. Feature extraction -- 5.3.2. Fine-tuning -- 5.3.3. Wrapping up -- 5.4. Visualizing what convnets learn -- 5.4.1. Visualizing intermediate activations -- 5.4.2. Visualizing convnet filters -- 5.4.3. Visualizing heatmaps of class activation -- Chapter 6. Deep learning for text and sequences -- 6.1. Working with text data -- 6.1.1. One-hot encoding of words and characters -- 6.1.2. Using word embeddings -- 6.1.3. Putting it all together: from raw text to word embeddings -- 6.1.4. Wrapping up -- 6.2. Understanding recurrent neural networks -- 6.2.1. A recurrent layer in Keras -- 6.2.2. Understanding the LSTM and GRU layers -- 6.2.3. A concrete LSTM example in Keras -- 6.2.4. Wrapping up -- 6.3. Advanced use of recurrent neural networks -- 6.3.1. A temperature-forecasting problem -- 6.3.2. Preparing the data -- 6.3.3. A common-sense, non-machine-learning baseline -- 6.3.4. A basic machine-learning approach -- 6.3.5. A first recurrent baseline -- 6.3.6. Using recurrent dropout to fight overfitting -- 6.3.7. Stacking recurrent layers -- 6.3.8. Using bidirectional RNNs -- 6.3.9. Going even further -- 6.3.10. Wrapping up -- 6.4. Sequence processing with convnets -- 6.4.1. Understanding 1D convolution for sequence data -- 6.4.2. 1D pooling for sequence data -- 6.4.3. Implementing a 1D convnet -- 6.4.4. Combining CNNs and RNNs to process long sequences -- 6.4.5. Wrapping up -- Chapter 7. Advanced deep-learning best practices -- 7.1. Going beyond the Sequential model: the Keras functional API -- 7.1.1. Introduction to the functional API -- 7.1.2. Multi-input models -- 7.1.3. Multi-output models -- 7.1.4. Directed acyclic graphs of layers -- 7.1.5. Layer weight sharing -- 7.1.6. Models as layers -- 7.1.7. Wrapping up.
505 8_
|a 7.2. Inspecting and monitoring deep-learning models using Keras callbacks and TensorBoard -- 7.2.1. Using callbacks to act on a model during training -- 7.2.2. Introduction to TensorBoard: the TensorFlow visualization framework -- 7.2.3. Wrapping up -- 7.3. Getting the most out of your models -- 7.3.1. Advanced architecture patterns -- 7.3.2. Hyperparameter optimization -- 7.3.3. Model ensembling -- 7.3.4. Wrapping up -- Chapter 8. Generative deep learning -- 8.1. Text generation with LSTM -- 8.1.1. A brief history of generative recurrent networks -- 8.1.2. How do you generate sequence data? -- 8.1.3. The importance of the sampling strategy -- 8.1.4. Implementing character-level LSTM text generation -- 8.1.5. Wrapping up -- 8.2. DeepDream -- 8.2.1. Implementing DeepDream in Keras -- 8.2.2. Wrapping up -- 8.3. Neural style transfer -- 8.3.1. The content loss -- 8.3.2. The style loss -- 8.3.3. Neural style transfer in Keras -- 8.3.4. Wrapping up -- 8.4. Generating images with variational autoencoders -- 8.4.1. Sampling from latent spaces of images -- 8.4.2. Concept vectors for image editing -- 8.4.3. Variational autoencoders -- 8.4.4. Wrapping up -- 8.5. Introduction to generative adversarial networks -- 8.5.1. A schematic GAN implementation -- 8.5.2. A bag of tricks -- 8.5.3. The generator -- 8.5.4. The discriminator -- 8.5.5. The adversarial network -- 8.5.6. How to train your DCGAN -- 8.5.7. Wrapping up -- Chapter 9. Conclusions -- 9.1. Key concepts in review -- 9.1.1. Various approaches to AI -- 9.1.2. What makes deep learning special within the field of machine learning -- 9.1.3. How to think about deep learning -- 9.1.4. Key enabling technologies -- 9.1.5. The universal machine-learning workflow -- 9.1.6. Key network architectures -- 9.1.7. The space of possibilities -- 9.2. The limitations of deep learning.
590 __ |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
650 _0 |a Python (Computer program language)
650 _0 |a Machine learning.
650 _0 |a Neural networks (Computer science)
650 _6 |a Python (Langage de programmation)
650 _6 |a Apprentissage automatique.
650 _6 |a Réseaux neuronaux (Informatique)
650 _7 |a Machine learning |2 fast
650 _7 |a Neural networks (Computer science) |2 fast
650 _7 |a Python (Computer program language) |2 fast
776 08 |i Print version: |a Chollet, François. |t Deep learning with Python. |d Shelter Island, NY : Manning Publications, [2018] |w (DLC) 2018286242 |w (OCoLC)1019902016
856 40 |u https://learning.oreilly.com/library/view/~/9781617294433/?ar |z Full text (requires prior registration with an institutional email)
938 __ |a Askews and Holts Library Services |b ASKH |n AH39609125
938 __ |a YBP Library Services |b YANK |n 302272810
938 __ |a ProQuest Ebook Central |b EBLB |n EBL6642860
938 __ |a EBSCOhost |b EBSC |n 2948810
994 __ |a 92 |b IZTAP