LEADER 00000cam a2200000 4500
001    OR_on1159041705
003    OCoLC
005    20231017213018.0
006    m d
007    cr |||||||||||
008    200621s2020 ncu o 000 0 eng d
040 __ |a YDX |b eng |c YDX |d EBLCP |d N$T |d UAB |d OCLCF |d NJT |d UMI |d OCLCO |d UKAHL |d OCLCQ |d OCLCO
019 __ |a 1158802866 |a 1159170246 |a 1179125948 |a 1191752940 |a 1199337125
020 __ |a 1642959162 |q (electronic bk.)
020 __ |a 9781642959161 |q (electronic bk.)
020 __ |a 1642959170 |q (electronic bk.)
020 __ |a 9781642959178 |q (electronic bk.)
020 __ |z 9781642959727
020 __ |z 9781642959154
020 __ |z 9781642959185
029 1_ |a AU@ |b 000073555894
035 __ |a (OCoLC)1159041705 |z (OCoLC)1158802866 |z (OCoLC)1159170246 |z (OCoLC)1179125948 |z (OCoLC)1191752940 |z (OCoLC)1199337125
037 __ |a CL0501000152 |b Safari Books Online
050 _4 |a Q325.5
082 04 |a 006.3/1 |2 23
049 __ |a UAMI
100 1_ |a Blanchard, Robert |c (Data scientist), |e author.
245 10 |a Deep learning for computer vision with SAS : |b an introduction / |c Robert Blanchard.
264 _1 |a Cary, NC : |b SAS Institute, |c 2020.
300 __ |a 1 online resource
336 __ |a text |b txt |2 rdacontent
337 __ |a computer |b c |2 rdamedia
338 __ |a online resource |b cr |2 rdacarrier
505 0_ |a Intro -- Contents -- About This Book -- What Does This Book Cover? -- Is This Book for You? -- What Should You Know about the Examples? -- Software Used to Develop the Book's Content -- Example Code and Data -- We Want to Hear from You -- About The Author -- Introduction to Deep Learning -- Introduction to Neural Networks -- Biological Neurons -- Mathematical Neurons -- Figure 1.1: Multilayer Perceptron -- Deep Learning -- Table 1.1: Traditional Neural Networks versus Deep Learning -- Figure 1.2: Hyperbolic Tangent Function -- Figure 1.3: Rectified Linear Function
505 8_ |a Figure 1.4: Exponential Linear Function -- Batch Gradient Descent -- Figure 1.5: Batch Gradient Descent -- Stochastic Gradient Descent -- Figure 1.6: Stochastic Gradient Descent -- Introduction to ADAM Optimization -- Weight Initialization -- Figure 1.7: Constant Variance (Standard Deviation = 1) -- Figure 1.8: Constant Variance (Standard Deviation = ...) -- Regularization -- Figure 1.9: Regularization Techniques -- Batch Normalization -- Batch Normalization with Mini-Batches -- Traditional Neural Networks versus Deep Learning
505 8_ |a Table 1.2: Comparison of Central Processing Units and Graphical Processing Units -- Deep Learning Actions -- Building a Deep Neural Network -- Table 1.3: Layer Types -- Training a Deep Learning CAS Action Model -- Demonstration 1: Loading and Modeling Data with Traditional Neural Network Methods -- Table 1.4: Develop Data Set Variables -- Figure 1.10: Results of the FREQ Procedure -- Figure 1.11: Results of the NNET Procedure -- Figure 1.12: Score Information -- Demonstration 2: Building and Training Deep Learning Neural Networks Using CASL Code
505 8_ |a Figure 1.13: Transcription of the Model Architecture -- Figure 1.14: Model Shell and Layer Information -- Figure 1.15: Model Information -- Figure 1.15: Optimization History Table -- Figure 1.16: Model Information Details -- Convolutional Neural Networks -- Introduction to Convoluted Neural Networks -- Input Layers -- Figure 2.1: Convolutional Neural Network -- Figure 2.2: Grayscale Image Channel -- Figure 2.3: Color Image Channels -- Convolutional Layers -- Figure 2.4: Single-channel Convolution Without Kernel Flipping -- Using Filters -- Figure 2.5: Starting Position of the Filter
505 8_ |a Figure 2.6: Products of the Entries Between the Filter and Input -- Figure 2.7: Range Movement Due to STRIDE Hyperparameter -- Figure 2.8: Feature Map with Filter Response at Every Spatial Position -- Figure 2.9: Filter Weights and Nonlinear Transformation -- Padding -- Figure 2.10: Feature Map Without Padding -- Figure 2.11: Feature Map with Padding -- Figure 2.12: Without Padding -- Figure 2.13: Automatic Padding with SAS -- Figure 2.14: SAS Automatically Adjusts for Non-Integer Feature Maps -- Feature Map Dimensions -- Figure 2.15: Feature Map Dimensions -- Pooling Layers
520 __ |a Discover deep learning and computer vision with SAS! Deep Learning for Computer Vision with SAS®: An Introduction introduces the pivotal components of deep learning. Readers will gain an in-depth understanding of how to build deep feedforward and convolutional neural networks, as well as variants of denoising autoencoders. Transfer learning is covered to help readers learn about this emerging field. Containing a mix of theory and application, this book will also briefly cover methods for customizing deep learning models to solve novel business problems or answer research questions. SAS program.
504 __ |a Includes bibliographical references.
590 __ |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
630 00 |a SAS (Computer file)
630 07 |a SAS (Computer file) |2 fast
650 _0 |a Machine learning.
650 _0 |a Computer vision.
650 _6 |a Apprentissage automatique.
650 _6 |a Vision par ordinateur.
650 _7 |a Computer vision |2 fast
650 _7 |a Machine learning |2 fast
776 08 |i Print version: |a Blanchard, Robert |t Deep Learning for Computer Vision with SAS : An Introduction |d Cary, NC : SAS Institute, c2020 |z 9781642959154
856 40 |u https://learning.oreilly.com/library/view/~/9781642959178/?ar |z Full text (prior registration with an institutional email address required)
938 __ |a Askews and Holts Library Services |b ASKH |n AH38060015
938 __ |a YBP Library Services |b YANK |n 16814967
938 __ |a Askews and Holts Library Services |b ASKH |n AH37464840
938 __ |a ProQuest Ebook Central |b EBLB |n EBL6228801
938 __ |a YBP Library Services |b YANK |n 301337494
938 __ |a EBSCOhost |b EBSC |n 2500795
994 __ |a 92 |b IZTAP