LEADER 00000cam a2200000 i 4500
001    OR_on1110716984
003    OCoLC
005    20231017213018.0
006    m o d
007    cr |n|||||||||
008    190803s2019 enk ob 001 0 eng d
040 __ |a YDX |b eng |e rda |e pn |c YDX |d UKMGB |d OCLCO |d UKAHL |d OCLCF |d N$T |d TEFOD |d NJT |d STF |d EBLCP |d YDXIT |d UMI |d UAB |d OCLCQ |d OCL |d OCLCO |d OCLCQ |d OCLCO |d NZAUC |d OCLCQ
015 __ |a GBB9B4157 |2 bnb
016 7_ |a 019446102 |2 Uk
019 __ |a 1111433895 |a 1114455518 |a 1126570352 |a 1131897942
020 __ |a 9781789344516 |q (electronic bk.)
020 __ |a 1789344514 |q (electronic bk.)
020 __ |z 1789344158
020 __ |z 9781789344158
029 1_ |a AU@ |b 000066230922
029 1_ |a AU@ |b 000067105683
029 1_ |a UKMGB |b 019446102
035 __ |a (OCoLC)1110716984 |z (OCoLC)1111433895 |z (OCoLC)1114455518 |z (OCoLC)1126570352 |z (OCoLC)1131897942
037 __ |a 9781789344516 |b Packt Publishing
037 __ |a BE91522B-7B10-48DE-80F7-9A89351EA7A8 |b OverDrive, Inc. |n http://www.overdrive.com
050 _4 |a QA76.73.P98
082 04 |a 006.31 |2 23
049 __ |a UAMI
100 1_ |a Ravichandiran, Sudharsan, |e author.
245 10 |a Hands-on deep learning algorithms with Python : |b master deep learning algorithms with extensive math by implementing them using TensorFlow / |c Sudharsan Ravichandiran.
264 _1 |a Birmingham : |b Packt Publishing Ltd, |c 2019.
300 __ |a 1 online resource
336 __ |a text |b txt |2 rdacontent
337 __ |a computer |b c |2 rdamedia
338 __ |a online resource |b cr |2 rdacarrier
520 __ |a This book introduces basic-to-advanced deep learning algorithms used in a production environment by AI researchers and principal data scientists; it explains algorithms intuitively, including the underlying math, and shows how to implement them using popular Python-based deep learning libraries such as TensorFlow.
504 __ |a Includes bibliographical references and index.
505 0_ |a Cover; Title Page; Copyright and Credits; Dedication; About Packt; Contributors; Table of Contents; Preface; Section 1: Getting Started with Deep Learning; Chapter 1: Introduction to Deep Learning; What is deep learning?; Biological and artificial neurons; ANN and its layers; Input layer; Hidden layer; Output layer; Exploring activation functions; The sigmoid function; The tanh function; The Rectified Linear Unit function; The leaky ReLU function; The Exponential linear unit function; The Swish function; The softmax function; Forward propagation in ANN; How does ANN learn?
505 8_ |a Debugging gradient descent with gradient checking; Putting it all together; Building a neural network from scratch; Summary; Questions; Further reading; Chapter 2: Getting to Know TensorFlow; What is TensorFlow?; Understanding computational graphs and sessions; Sessions; Variables, constants, and placeholders; Variables; Constants; Placeholders and feed dictionaries; Introducing TensorBoard; Creating a name scope; Handwritten digit classification using TensorFlow; Importing the required libraries; Loading the dataset; Defining the number of neurons in each layer; Defining placeholders
505 8_ |a Forward propagation; Computing loss and backpropagation; Computing accuracy; Creating summary; Training the model; Visualizing graphs in TensorBoard; Introducing eager execution; Math operations in TensorFlow; TensorFlow 2.0 and Keras; Bonjour Keras; Defining the model; Defining a sequential model; Defining a functional model; Compiling the model; Training the model; Evaluating the model; MNIST digit classification using TensorFlow 2.0; Should we use Keras or TensorFlow?; Summary; Questions; Further reading; Section 2: Fundamental Deep Learning Algorithms
505 8_ |a Chapter 3: Gradient Descent and Its Variants; Demystifying gradient descent; Performing gradient descent in regression; Importing the libraries; Preparing the dataset; Defining the loss function; Computing the gradients of the loss function; Updating the model parameters; Gradient descent versus stochastic gradient descent; Momentum-based gradient descent; Gradient descent with momentum; Nesterov accelerated gradient; Adaptive methods of gradient descent; Setting a learning rate adaptively using Adagrad; Doing away with the learning rate using Adadelta
505 8_ |a Overcoming the limitations of Adagrad using RMSProp; Adaptive moment estimation; Adamax -- Adam based on infinity-norm; Adaptive moment estimation with AMSGrad; Nadam -- adding NAG to ADAM; Summary; Questions; Further reading; Chapter 4: Generating Song Lyrics Using RNN; Introducing RNNs; The difference between feedforward networks and RNNs; Forward propagation in RNNs; Backpropagating through time; Gradients with respect to the hidden to output weight, V; Gradients with respect to hidden to hidden layer weights, W; Gradients with respect to input to the hidden layer weight, U
590 __ |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
630 00 |a TensorFlow.
650 _0 |a Python (Computer program language)
650 _0 |a Application software |x Development.
650 _6 |a Python (Langage de programmation)
650 _6 |a Logiciels d'application |x Développement.
650 _7 |a Application software |x Development. |2 fast |0 (OCoLC)fst00811707
650 _7 |a Computer algorithms. |2 fast |0 (OCoLC)fst00872010
650 _7 |a Machine learning. |2 fast |0 (OCoLC)fst01004795
650 _7 |a Python (Computer program language) |2 fast |0 (OCoLC)fst01084736
776 08 |i Print version: |z 1789344158 |z 9781789344158 |w (OCoLC)1083564019
856 40 |u https://learning.oreilly.com/library/view/~/9781789344158/?ar |z Full text (requires prior registration with an institutional email address)
938 __ |a Askews and Holts Library Services |b ASKH |n AH36385287
938 __ |a ProQuest Ebook Central |b EBLB |n EBL5841215
938 __ |a EBSCOhost |b EBSC |n 2207102
938 __ |a YBP Library Services |b YANK |n 16379065
994 __ |a 92 |b IZTAP