LEADER |
00000cam a2200000 i 4500 |
001 |
OR_on1273425900 |
003 |
OCoLC |
005 |
20231017213018.0 |
006 |
m o d |
007 |
cr ||||||||||| |
008 |
210830t20222022njua ob 001 0 eng |
010 |
|
|
|a 2021041541
|
040 |
|
|
|a DLC
|b eng
|e rda
|c DLC
|d OCLCO
|d OCLCF
|d YDX
|d DG1
|d YDX
|d NLW
|d OCLCO
|d OCLCQ
|d UPM
|d OCLCQ
|d ORMDA
|d OCL
|d LANGC
|d OCLCQ
|
019 |
|
|
|a 1273049474
|
020 |
|
|
|a 9781119716730
|q electronic book
|
020 |
|
|
|a 111971673X
|q electronic book
|
020 |
|
|
|a 9781119716570
|q electronic book
|
020 |
|
|
|a 1119716578
|q electronic book
|
020 |
|
|
|a 9781119716761
|q electronic book
|
020 |
|
|
|a 1119716764
|q electronic book
|
020 |
|
|
|z 9781119716747
|q hardcover
|
029 |
1 |
|
|a AU@
|b 000069963468
|
035 |
|
|
|a (OCoLC)1273425900
|z (OCoLC)1273049474
|
037 |
|
|
|a 9781119716747
|b O'Reilly Media
|
042 |
|
|
|a pcc
|
050 |
0 |
4 |
|a Q325.5
|b .W558 2022
|
082 |
0 |
0 |
|a 006.3/1
|2 23/eng/20210927
|
049 |
|
|
|a UAMI
|
100 |
1 |
|
|a Winters-Hilt, Stephen,
|e author.
|
245 |
1 |
0 |
|a Informatics and machine learning :
|b from Martingales to metaheuristics /
|c Stephen Winters-Hilt.
|
264 |
|
1 |
|a Hoboken, NJ :
|b John Wiley & Sons, Inc.,
|c 2022.
|
264 |
|
4 |
|c ©2022
|
300 |
|
|
|a 1 online resource (xv, 566 pages) :
|b illustrations
|
336 |
|
|
|a text
|b txt
|2 rdacontent
|
337 |
|
|
|a computer
|b c
|2 rdamedia
|
338 |
|
|
|a online resource
|b cr
|2 rdacarrier
|
504 |
|
|
|a Includes bibliographical references and index.
|
520 |
|
|
|a "This book provides an interdisciplinary presentation on machine learning, bioinformatics and statistics. This book is an accumulation of lecture notes and interesting research tidbits from over two decades of the author's teaching experience. The chapters in this book can be traversed in different ways for different course offerings. In the classroom, the trend is moving towards hands-on work with running code. Therefore, the author provides lots of sample code to explicitly explain and provide example-based code for various levels of project work. This book is especially useful for professionals entering the rapidly growing Machine Learning field due to its complete presentation of the mathematical underpinnings and extensive examples of programming implementations. Many Machine Learning (ML) textbooks miss a strong intro/basis in terms of information theory. Using mutual information alone, for example, a genome's encoding scheme can be 'cracked' with less than one page of Python code. On the implementation side, many ML professional/reference texts often do not shown how to actually access raw data files and reformat the data into some more usable form. Methods and implementations to do this are described in the proposed text, where most code examples are in Python (some in C/C++, Perl, and Java, as well). Once the data is in hand all sorts of fun analytics and advanced machine learning tools can be brought to bear."--
|c Provided by publisher.
|
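[Editorial note: the publisher summary above claims that a genome's encoding scheme can be "cracked" with mutual information in under a page of Python. This is not code from the catalogued book; it is a minimal illustrative sketch, assuming a simple gap-based statistic: the mutual information (in bits) between nucleotides separated by a gap k, which on real protein-coding DNA tends to peak at k = 3, 6, 9, ..., exposing the triplet (codon) structure. The function name and the toy sequence are hypothetical stand-ins; a real run would read a FASTA file instead.]

import math
from collections import Counter

def mutual_information(seq, k):
    """Estimate I(X;Y) in bits between symbols seq[i] and seq[i + k]."""
    pairs = [(seq[i], seq[i + k]) for i in range(len(seq) - k)]
    n = len(pairs)
    joint = Counter(pairs)                 # joint counts of (x, y) pairs
    px = Counter(x for x, _ in pairs)      # marginal counts of x
    py = Counter(y for _, y in pairs)      # marginal counts of y
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), with marginals c_x/n, c_y/n
        mi += pxy * math.log2(pxy * n * n / (px[x] * py[y]))
    return mi

if __name__ == "__main__":
    # Toy stand-in sequence (a short repeat); the characteristic peak at
    # gaps that are multiples of 3 is a property of real coding DNA, not
    # necessarily of this artificial example.
    seq = "ATGGCTGCAGCTGGAGCAGCTGGTGCTGCAGGAGCT" * 50
    for k in range(1, 10):
        print(f"gap {k}: I = {mutual_information(seq, k):.4f} bits")

[Swapping the hard-coded string for a FASTA reader keeps the sketch within the "less than one page of Python" scale the summary describes.]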
588 |
|
|
|a Description based on online resource; title from digital title page (viewed on January 25, 2022).
|
505 |
0 |
|
|a Calculus ... Python (or Perl) and Linux 2 1.2 Informatics and Data Analytics 3 1.3 FSA-Based Signal Acquisition and Bioinformatics 4 1.4 Feature Extraction and Language Analytics 7 1.5 Feature Extraction and Gene Structure Identification 8 1.5.1 HMMs for Analysis of Information Encoding Molecules 11 1.5.2 HMMs for Cheminformatics and Generic Signal Analysis 11 1.6 Theoretical Foundations for Learning 13 1.7 Classification and Clustering 13 1.8 Search 14 1.9 Stochastic Sequential Analysis (SSA) Protocol (Deep Learning Without NNs) 15 1.9.1 Stochastic Carrier Wave (SCW) Analysis - Nanoscope Signal Analysis 18 1.9.2 Nanoscope Cheminformatics - A Case Study for Device "Smartening" 19 1.10 Deep Learning using Neural Nets 20 1.11 Mathematical Specifics and Computational Implementations 21 2 Probabilistic Reasoning and Bioinformatics 23 2.1 Python Shell Scripting
|
505 |
0 |
|
|a Wikipedia 125 5.1.1.2 Library of Babel 126 5.1.1.3 Weather Scraper 127 5.1.1.4 Stock Scraper - New-Style with Cookies 128 5.1.2 Word Frequency Analysis: Machiavelli's Polysemy on Fortuna and Virtu 130 5.1.3 Word Frequency Analysis: Coleridge's Hidden Polysemy on Logos 139 5.1.4 Sentiment Analysis 143 5.2 Phrases - Short (Three Words) 145 5.2.1 Shakespearean Insult Generation - Phrase Generation 147 5.3 Phrases - Long (A Line or Sentence) 150 5.3.1 Iambic Phrase Analysis: Shakespeare 150 5.3.2 Natural Language Processing 152 5.3.3 Sentence and Story Generation: Tarot 152 5.4 Exercises 153 6 Analysis of Sequential Data Using HMMs 155 6.1 Hidden Markov Models (HMMs) 155 6.1.1 Background and Role in Stochastic Sequential Analysis (SSA) 155 6.1.2 When to Use a Hidden Markov Model (HMM)? 160 6.1.3 Hidden Markov Models (HMMs) - Standard
|
505 |
0 |
|
|a Formulation and Terms 161 6.2 Graphical Models for Markov Models and Hidden Markov Models 162 6.2.1 Hidden Markov Models 162 6.2.2 Viterbi Path 163 6.2.2.1 The Most Probable State Sequence 164 6.2.3 Forward and Backward Probabilities 164 6.2.4 HMM: Maximum Likelihood discrimination 165 6.2.5 Expectation/Maximization (Baum-Welch) 166 6.2.5.1 Emission and Transition Expectations with Rescaling 167 6.3 Standard HMM Weaknesses and their GHMM Fixes 168 6.4 Generalized HMMs (GHMMs - "Gems"): Minor Viterbi Variants 171 6.4.1 The Generic HMM 171 6.4.2 pMM/SVM 171 6.4.3 EM and Feature Extraction via EVA Projection 172 6.4.4 Feature Extraction via Data Absorption (a.k.a.
|
505 |
0 |
|
|a (Further details in Appendix) 232 7.6 Exercises 234 8 Neuromanifolds and the Uniqueness of Relative Entropy 235 8.1 Overview 235 8.2 Review of Differential Geometry 236 8.2.1 Differential Topology - Natural Manifold 236 8.2.2 Differential Geometry - Natural Geometric Structures 240 8.3 Amari's Dually Flat Formulation 243 8.3.1 Generalization of Pythagorean Theorem 246 8.3.2 Projection Theorem and Relation Between Divergence and Link Formalism 246 8.4 Neuromanifolds 247 8.5 Exercises 250 9 Neural Net Learning and Loss Bounds Analysis 253 9.1 Brief Introduction to Neural Nets (NNs) 254 9.1.1 Single Neuron Discriminator 254 9.1.1.1 The Perceptron 254 9.1.1.2 Sigmoid Neurons 256 9.1.1.3 The Loss Function and Gradient Descent 257 9.1.2 Neural Net with Back-Propagation 258 9.1.2.1 The Loss Function - General Activation in a General Neural
|
590 |
|
|
|a O'Reilly
|b O'Reilly Online Learning: Academic/Public Library Edition
|
650 |
|
0 |
|a Machine learning.
|
650 |
|
0 |
|a Computer science.
|
650 |
|
0 |
|a Bioinformatics.
|
650 |
|
0 |
|a Electronic data processing.
|
650 |
|
0 |
|a Computational biology.
|
650 |
|
6 |
|a Apprentissage automatique.
|
650 |
|
6 |
|a Informatique.
|
650 |
|
6 |
|a Bio-informatique.
|
650 |
|
7 |
|a data processing.
|2 aat
|
650 |
|
7 |
|a computer science.
|2 aat
|
650 |
|
7 |
|a Electronic data processing.
|2 fast
|0 (OCoLC)fst00906956
|
650 |
|
7 |
|a Computational biology.
|2 fast
|0 (OCoLC)fst00871990
|
650 |
|
7 |
|a Bioinformatics.
|2 fast
|0 (OCoLC)fst00832181
|
650 |
|
7 |
|a Computer science.
|2 fast
|0 (OCoLC)fst00872451
|
650 |
|
7 |
|a Machine learning.
|2 fast
|0 (OCoLC)fst01004795
|
776 |
0 |
8 |
|i Print version:
|a Winters-Hilt, Stephen.
|t Informatics and machine learning
|d Hoboken, NJ : Wiley, 2021
|z 9781119716747
|w (DLC) 2021041540
|
856 |
4 |
0 |
|u https://learning.oreilly.com/library/view/~/9781119716747/?ar
|z Full text (requires prior registration with an institutional email)
|
938 |
|
|
|a YBP Library Services
|b YANK
|n 17776298
|
994 |
|
|
|a 92
|b IZTAP
|