|
|
|
|
LEADER |
00000cam a2200000 i 4500 |
001 |
OR_on1356522426 |
003 |
OCoLC |
005 |
20231017213018.0 |
006 |
m o d |
007 |
cr cnu---unuuu |
008 |
230106s2023 cau o 000 0 eng d |
040 |
|
|
|a YDX
|b eng
|e rda
|c YDX
|d ORMDA
|d EBLCP
|d YDX
|d GW5XE
|d UKAHL
|d N$T
|d OCLCF
|d OCLCO
|
019 |
|
|
|a 1356982291
|
020 |
|
|
|a 9781484286920
|q (electronic bk.)
|
020 |
|
|
|a 1484286928
|q (electronic bk.)
|
020 |
|
|
|z 148428691X
|
020 |
|
|
|z 9781484286913
|
024 |
7 |
|
|a 10.1007/978-1-4842-8692-0
|2 doi
|
029 |
1 |
|
|a AU@
|b 000073290290
|
035 |
|
|
|a (OCoLC)1356522426
|z (OCoLC)1356982291
|
037 |
|
|
|a 9781484286920
|b O'Reilly Media
|
050 |
|
4 |
|a Q325.5
|b .Y42 2023
|
072 |
|
7 |
|a UYQ
|2 bicssc
|
072 |
|
7 |
|a COM004000
|2 bisacsh
|
072 |
|
7 |
|a UYQ
|2 thema
|
082 |
0 |
4 |
|a 006.3/1
|2 23/eng/20230106
|
049 |
|
|
|a UAMI
|
100 |
1 |
|
|a Ye, Andre,
|e author.
|
245 |
1 |
0 |
|a Modern deep learning for tabular data :
|b novel approaches to common modeling problems /
|c Andre Ye, Zian Wang.
|
264 |
|
1 |
|a [Berkeley] :
|b Apress,
|c [2023]
|
300 |
|
|
|a 1 online resource
|
336 |
|
|
|a text
|b txt
|2 rdacontent
|
337 |
|
|
|a computer
|b c
|2 rdamedia
|
338 |
|
|
|a online resource
|b cr
|2 rdacarrier
|
520 |
|
|
|a Deep learning is one of the most powerful tools in the modern artificial intelligence landscape. While it has predominantly been applied to highly specialized image, text, and signal datasets, this book synthesizes and presents novel deep learning approaches to a seemingly unlikely domain - tabular data. Whether in finance, business, security, medicine, or countless other domains, deep learning can help mine and model complex patterns in tabular data - a ubiquitous form of structured data. Part I of the book offers a rigorous overview of machine learning principles, algorithms, and implementation skills relevant to holistically modeling and manipulating tabular data. Part II studies five dominant deep learning model designs - Artificial Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, Attention and Transformers, and Tree-Rooted Networks - through both their 'default' usage and their application to tabular data. Part III compounds the power of the previously covered methods by surveying strategies and techniques to supercharge deep learning systems: autoencoders, deep data generation, meta-optimization, multi-model arrangement, and neural network interpretability. Each chapter comes with extensive visualization, code, and relevant research coverage. Modern Deep Learning for Tabular Data is one of the first of its kind - a wide exploration of deep learning theory and applications to tabular data, integrating and documenting novel methods and techniques in the field. This book provides a strong conceptual and theoretical toolkit to approach challenging tabular data problems. What You Will Learn: Important concepts and developments in modern machine learning and deep learning, with a strong emphasis on tabular data applications. Understand the promising links between deep learning and tabular data, and when a deep learning approach is or isn't appropriate. 
Apply promising research and unique modeling approaches in real-world data contexts. Explore and engage with modern, research-backed theoretical advances in deep tabular modeling. Utilize unique and successful preprocessing methods to prepare tabular data for successful modeling. Who This Book Is For: Data scientists and researchers of all levels, from beginner to advanced, looking to level up results on tabular data with deep learning or to understand the theoretical and practical aspects of deep tabular modeling research. Applicable to readers seeking to apply deep learning to all sorts of complex tabular data contexts, including business, finance, medicine, education, and security.
|
505 |
0 |
|
|a Intro -- Table of Contents -- About the Authors -- About the Technical Reviewer -- Acknowledgments -- Foreword 1 -- Foreword 2 -- Introduction -- Part I: Machine Learning and Tabular Data -- Chapter 1: Classical Machine Learning Principles and Methods -- Fundamental Principles of Modeling -- What Is Modeling? -- Modes of Learning -- Quantitative Representations of Data: Regression and Classification -- The Machine Learning Data Cycle: Training, Validation, and Test Sets -- Bias-Variance Trade-Off -- Feature Space and the Curse of Dimensionality -- Optimization and Gradient Descent
|
505 |
8 |
|
|a Metrics and Evaluation -- Mean Absolute Error -- Mean Squared Error (MSE) -- Confusion Matrix -- Accuracy -- Precision -- Recall -- F1 Score -- Area Under the Receiver Operating Characteristics Curve (ROC-AUC) -- Algorithms -- K-Nearest Neighbors -- Theory and Intuition -- Implementation and Usage -- Linear Regression -- Theory and Intuition -- Implementation and Usage -- Other Variations on Simple Linear Regression -- Logistic Regression -- Theory and Intuition -- Implementation and Usage -- Other Variations on Logistic Regression -- Decision Trees -- Theory and Intuition
|
505 |
8 |
|
|a Implementation and Usage -- Random Forest -- Gradient Boosting -- Theory and Intuition -- AdaBoost -- XGBoost -- LightGBM -- Summary of Algorithms -- Thinking Past Classical Machine Learning -- Key Points -- Chapter 2: Data Preparation and Engineering -- Data Storage and Manipulation -- TensorFlow Datasets -- Creating a TensorFlow Dataset -- TensorFlow Sequence Datasets -- Handling Large Datasets -- Datasets That Fit in Memory -- Pickle -- SciPy and TensorFlow Sparse Matrices -- Datasets That Do Not Fit in Memory -- Pandas Chunker -- h5py -- NumPy Memory Map -- Data Encoding -- Discrete Data
|
505 |
8 |
|
|a Label Encoding -- One-Hot Encoding -- Binary Encoding -- Frequency Encoding -- Target Encoding -- Leave-One-Out Encoding -- James-Stein Encoding -- Weight of Evidence -- Continuous Data -- Min-Max Scaling -- Robust Scaling -- Standardization -- Text Data -- Keyword Search -- Raw Vectorization -- Bag of Words -- N-Grams -- TF-IDF -- Sentiment Extraction -- Word2Vec -- Time Data -- Geographical Data -- Feature Extraction -- Single- and Multi-feature Transformations -- Principal Component Analysis -- t-SNE -- Linear Discriminant Analysis -- Statistics-Based Engineering -- Feature Selection
|
505 |
8 |
|
|a Information Gain -- Variance Threshold -- High-Correlation Method -- Recursive Feature Elimination -- Permutation Importance -- LASSO Coefficient Selection -- Key Points -- Part II: Applied Deep Learning Architectures -- Chapter 3: Neural Networks and Tabular Data -- What Exactly Are Neural Networks? -- Neural Network Theory -- Starting with a Single Neuron -- Feed-Forward Operation -- Introduction to Keras -- Modeling with Keras -- Defining the Architecture -- Compiling the Model -- Training and Evaluation -- Loss Functions -- Math Behind Feed-Forward Operation -- Activation Functions
|
588 |
|
|
|a Description based on online resource; title from digital title page (viewed on January 18, 2023).
|
590 |
|
|
|a O'Reilly
|b O'Reilly Online Learning: Academic/Public Library Edition
|
650 |
|
0 |
|a Machine learning.
|
650 |
|
0 |
|a Mathematical models.
|
650 |
|
6 |
|a Apprentissage automatique.
|
650 |
|
7 |
|a Machine learning
|2 fast
|
650 |
|
7 |
|a Mathematical models
|2 fast
|
700 |
1 |
|
|a Wang, Zian,
|e author.
|
776 |
0 |
8 |
|i Print version:
|z 148428691X
|z 9781484286913
|w (OCoLC)1336954999
|
856 |
4 |
0 |
|u https://learning.oreilly.com/library/view/~/9781484286920/?ar
|z Full text (requires prior registration with an institutional email)
|
938 |
|
|
|a Askews and Holts Library Services
|b ASKH
|n AH41098278
|
938 |
|
|
|a ProQuest Ebook Central
|b EBLB
|n EBL7165517
|
938 |
|
|
|a YBP Library Services
|b YANK
|n 304074682
|
938 |
|
|
|a EBSCOhost
|b EBSC
|n 3512972
|
994 |
|
|
|a 92
|b IZTAP
|