LEADER |
00000cam a2200000Mu 4500 |
001 |
OR_on1202550935 |
003 |
OCoLC |
005 |
20231017213018.0 |
006 |
m d |
007 |
cr |n||||||||| |
008 |
201011s2020 xx o ||| 0 und d |
040 |
|
|
|a VT2
|b eng
|c VT2
|d EBLCP
|d TOH
|d OCLCQ
|d LANGC
|d OCLCQ
|
020 |
|
|
|a 9781492083290
|
020 |
|
|
|a 1492083291
|
035 |
|
|
|a (OCoLC)1202550935
|
082 |
0 |
4 |
|a 006.3/1
|q OCoLC
|2 23/eng/20230216
|
049 |
|
|
|a UAMI
|
245 |
0 |
0 |
|a Introducing MLOps
|h [electronic resource] /
|c Léo Dreyfus-Schmidt ... [et al.].
|
260 |
|
|
|a [S.l.] :
|b O'Reilly Media, Inc.,
|c 2020.
|
300 |
|
|
|a 1 online resource
|
500 |
|
|
|a Title from content provider.
|
505 |
0 |
|
|a Cover -- Copyright -- Table of Contents -- Preface -- Who This Book Is For -- How This Book Is Organized -- Conventions Used in This Book -- O'Reilly Online Learning -- How to Contact Us -- Acknowledgments -- Part I. MLOps: What and Why -- Chapter 1. Why Now and Challenges -- Defining MLOps and Its Challenges -- MLOps to Mitigate Risk -- Risk Assessment -- Risk Mitigation -- MLOps for Responsible AI -- MLOps for Scale -- Closing Thoughts -- Chapter 2. People of MLOps -- Subject Matter Experts -- Data Scientists -- Data Engineers -- Software Engineers -- DevOps -- Model Risk Manager/Auditor
|
505 |
8 |
|
|a Machine Learning Architect -- Closing Thoughts -- Chapter 3. Key MLOps Features -- A Primer on Machine Learning -- Model Development -- Establishing Business Objectives -- Data Sources and Exploratory Data Analysis -- Feature Engineering and Selection -- Training and Evaluation -- Reproducibility -- Responsible AI -- Productionalization and Deployment -- Model Deployment Types and Contents -- Model Deployment Requirements -- Monitoring -- DevOps Concerns -- Data Scientist Concerns -- Business Concerns -- Iteration and Life Cycle -- Iteration -- The Feedback Loop -- Governance -- Data Governance
|
505 |
8 |
|
|a Process Governance -- Closing Thoughts -- Part II. MLOps: How -- Chapter 4. Developing Models -- What Is a Machine Learning Model? -- In Theory -- In Practice -- Required Components -- Different ML Algorithms, Different MLOps Challenges -- Data Exploration -- Feature Engineering and Selection -- Feature Engineering Techniques -- How Feature Selection Impacts MLOps Strategy -- Experimentation -- Evaluating and Comparing Models -- Choosing Evaluation Metrics -- Cross-Checking Model Behavior -- Impact of Responsible AI on Modeling -- Version Management and Reproducibility -- Closing Thoughts
|
505 |
8 |
|
|a Chapter 5. Preparing for Production -- Runtime Environments -- Adaptation from Development to Production Environments -- Data Access Before Validation and Launch to Production -- Final Thoughts on Runtime Environments -- Model Risk Evaluation -- The Purpose of Model Validation -- The Origins of ML Model Risk -- Quality Assurance for Machine Learning -- Key Testing Considerations -- Reproducibility and Auditability -- Machine Learning Security -- Adversarial Attacks -- Other Vulnerabilities -- Model Risk Mitigation -- Changing Environments -- Interactions Between Models -- Model Misbehavior
|
505 |
8 |
|
|a Closing Thoughts -- Chapter 6. Deploying to Production -- CI/CD Pipelines -- Building ML Artifacts -- What's in an ML Artifact? -- The Testing Pipeline -- Deployment Strategies -- Categories of Model Deployment -- Considerations When Sending Models to Production -- Maintenance in Production -- Containerization -- Scaling Deployments -- Requirements and Challenges -- Closing Thoughts -- Chapter 7. Monitoring and Feedback Loop -- How Often Should Models Be Retrained? -- Understanding Model Degradation -- Ground Truth Evaluation -- Input Drift Detection -- Drift Detection in Practice
|
520 |
|
|
|a More than half of the analytics and machine learning (ML) models created by organizations today never make it into production. Instead, many of these ML models do nothing more than provide static insights in a slideshow. If they aren't truly operational, these models can't possibly do what you've trained them to do. This book introduces practical concepts to help data scientists and application engineers operationalize ML models to drive real business change. Through lessons based on numerous projects around the world, six experts in data analytics provide an applied four-step approach (Build, Manage, Deploy and Integrate, and Monitor) for creating ML-infused applications within your organization. You'll learn how to: fulfill data science value by reducing friction throughout ML pipelines and workflows; constantly refine ML models through retraining, periodic tuning, and even complete remodeling to ensure long-term accuracy; design the MLOps life cycle to ensure that people-facing models are unbiased, fair, and explainable; operationalize ML models not only for pipeline deployment but also for external business systems that are more complex and less standardized; and put the four-step Build, Manage, Deploy and Integrate, and Monitor approach into action.
|
590 |
|
|
|a O'Reilly
|b O'Reilly Online Learning: Academic/Public Library Edition
|
700 |
1 |
|
|a Dreyfus-Schmidt, Léo.
|
856 |
4 |
0 |
|u https://learning.oreilly.com/library/view/~/9781492083283/?ar
|z Full text (prior registration with an institutional email address required)
|
938 |
|
|
|a ProQuest Ebook Central
|b EBLB
|n EBL6417152
|
994 |
|
|
|a 92
|b IZTAP
|