Pro deep learning with TensorFlow 2.0 : a mathematical approach to advanced artificial intelligence in Python /

Bibliographic Details
Classification: Electronic Book
Main Author: Pattanayak, Santanu (Author)
Format: Electronic eBook
Language: English
Published: New York, NY : Apress, [2023]
Edition: Second edition.
Subjects:
Online Access: Full text (requires prior registration with an institutional email)

MARC

LEADER 00000cam a22000007i 4500
001 OR_on1356976064
003 OCoLC
005 20231017213018.0
006 m o d
007 cr cnu|||unuuu
008 230106t20232023nyua ob 001 0 eng d
040 |a ORMDA  |b eng  |e rda  |e pn  |c ORMDA  |d EBLCP  |d YDX  |d GW5XE  |d UKAHL  |d OCLCF  |d OCLCO 
019 |a 1356794111 
020 |a 9781484289310  |q (electronic bk.) 
020 |a 1484289315  |q (electronic bk.) 
020 |z 9781484289303 
020 |z 1484289307 
024 7 |a 10.1007/978-1-4842-8931-0  |2 doi 
029 1 |a AU@  |b 000073289685 
029 1 |a AU@  |b 000073290262 
035 |a (OCoLC)1356976064  |z (OCoLC)1356794111 
037 |a 9781484289310  |b O'Reilly Media 
050 4 |a Q325.5  |b .P37 2023 
072 7 |a UYQ  |2 bicssc 
072 7 |a COM004000  |2 bisacsh 
072 7 |a UYQ  |2 thema 
082 0 4 |a 006.3/1  |2 23/eng/20230106 
049 |a UAMI 
100 1 |a Pattanayak, Santanu,  |e author. 
245 1 0 |a Pro deep learning with TensorFlow 2.0 :  |b a mathematical approach to advanced artificial intelligence in Python /  |c Santanu Pattanayak. 
250 |a Second edition. 
264 1 |a New York, NY :  |b Apress,  |c [2023] 
264 4 |c ©2023 
300 |a 1 online resource (667 pages) :  |b illustrations 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
504 |a Includes bibliographical references and index. 
520 |a This book builds upon the foundations established in its first edition, with updated chapters and the latest code implementations to bring it up to date with TensorFlow 2.0. Pro Deep Learning with TensorFlow 2.0 begins with the mathematical and core technical foundations of deep learning. Next, you will learn about convolutional neural networks, including new convolutional methods such as dilated convolution, depth-wise separable convolution, and their implementation. You'll then gain an understanding of natural language processing in advanced network architectures such as transformers and various attention mechanisms relevant to natural language processing and neural networks in general. As you progress through the book, you'll explore unsupervised learning frameworks that reflect the current state of deep learning methods, such as autoencoders and variational autoencoders. The final chapter covers the advanced topics of generative adversarial networks and their variants, such as cycle consistency GANs, and graph neural network techniques such as graph attention networks and GraphSAGE. Upon completing this book, you will understand the mathematical foundations and concepts of deep learning, and be able to use the prototypes demonstrated to build new deep learning applications. What You Will Learn: Understand full-stack deep learning using TensorFlow 2.0; Gain an understanding of the mathematical foundations of deep learning; Deploy complex deep learning solutions in production using TensorFlow 2.0; Understand generative adversarial networks, graph attention networks, and GraphSAGE. Who This Book Is For: Data scientists and machine learning professionals, software developers, graduate students, and open source enthusiasts. 
505 0 |a Intro -- Table of Contents -- About the Author -- About the Technical Reviewer -- Introduction -- Chapter 1: Mathematical Foundations -- Linear Algebra -- Vector -- Scalar -- Matrix -- Tensor -- Matrix Operations and Manipulations -- Addition of Two Matrices -- Subtraction of Two Matrices -- Product of Two Matrices -- Transpose of a Matrix -- Dot Product of Two Vectors -- Matrix Working on a Vector -- Linear Independence of Vectors -- Rank of a Matrix -- Identity Matrix or Operator -- Determinant of a Matrix -- Interpretation of Determinant -- Inverse of a Matrix -- Norm of a Vector 
505 8 |a Pseudo-Inverse of a Matrix -- Unit Vector in the Direction of a Specific Vector -- Projection of a Vector in the Direction of Another Vector -- Eigen Vectors -- Characteristic Equation of a Matrix -- Power Iteration Method for Computing Eigen Vector -- Calculus -- Differentiation -- Gradient of a Function -- Successive Partial Derivatives -- Hessian Matrix of a Function -- Maxima and Minima of Functions -- Rules for Maxima and Minima for a Univariate Function -- Local Minima and Global Minima -- Positive Semi-definite and Positive Definite -- Convex Set -- Convex Function -- Non-convex Function 
505 8 |a Multivariate Convex and Non-convex Functions Examples -- Taylor Series -- Probability -- Unions, Intersection, and Conditional Probability -- Chain Rule of Probability for Intersection of Event -- Mutually Exclusive Events -- Independence of Events -- Conditional Independence of Events -- Bayes Rule -- Probability Mass Function -- Probability Density Function -- Expectation of a Random Variable -- Variance of a Random Variable -- Skewness and Kurtosis -- Covariance -- Correlation Coefficient -- Some Common Probability Distribution -- Uniform Distribution -- Normal Distribution 
505 8 |a Multivariate Normal Distribution -- Bernoulli Distribution -- Binomial Distribution -- Poisson Distribution -- Beta Distribution -- Dirichlet Distribution -- Gamma Distribution -- Likelihood Function -- Maximum Likelihood Estimate -- Hypothesis Testing and p Value -- Formulation of Machine-Learning Algorithm and Optimization Techniques -- Supervised Learning -- Linear Regression as a Supervised Learning Method -- Linear Regression Through Vector Space Approach -- Classification -- Hyperplanes and Linear Classifiers -- Unsupervised Learning -- Reinforcement Learning 
505 8 |a Optimization Techniques for Machine-Learning Gradient Descent -- Gradient Descent for a Multivariate Cost Function -- Contour Plot and Contour Lines -- Steepest Descent -- Stochastic Gradient Descent -- Newton's Method -- Linear Curve -- Negative Curvature -- Positive Curvature -- Constrained Optimization Problem -- A Few Important Topics in Machine Learning -- Dimensionality-Reduction Methods -- Principal Component Analysis -- When Will PCA Be Useful in Data Reduction? -- How Do You Know How Much Variance Is Retained by the Selected Principal Components? -- Singular Value Decomposition 
588 |a Description based on online resource; title from digital title page (viewed on January 17, 2023). 
590 |a O'Reilly  |b O'Reilly Online Learning: Academic/Public Library Edition 
630 0 0 |a TensorFlow (Electronic resource) 
650 0 |a Machine learning. 
650 0 |a Artificial intelligence. 
650 6 |a Apprentissage automatique. 
650 6 |a Intelligence artificielle. 
650 7 |a artificial intelligence.  |2 aat 
650 7 |a Artificial intelligence  |2 fast 
650 7 |a Machine learning  |2 fast 
776 0 8 |c Original  |z 1484289307  |z 9781484289303  |w (OCoLC)1345459702 
856 4 0 |u https://learning.oreilly.com/library/view/~/9781484289310/?ar  |z Texto completo (Requiere registro previo con correo institucional) 
938 |a Askews and Holts Library Services  |b ASKH  |n AH41098303 
938 |a ProQuest Ebook Central  |b EBLB  |n EBL7165965 
938 |a YBP Library Services  |b YANK  |n 19015891 
994 |a 92  |b IZTAP