LEADER 00000cam a22000007a 4500
001    OR_on1143014709
003    OCoLC
005    20231017213018.0
006    m o d
007    cr cnu||||||||
008    100220s2020 xx o 000 0 eng
040    |a AU@ |b eng |e pn |c AU@ |d EBLCP |d OCLCQ |d LANGC |d OCLCQ
020    |z 9781492073925
024 8  |a 9781492073918
029 0  |a AU@ |b 000066784366
035    |a (OCoLC)1143014709
082 04 |a 005.133 |q OCoLC |2 23/eng/20230216
049    |a UAMI
100 1  |a Schneider, Jon, |e author.
245 10 |a SRE with Java Microservices / |c Schneider, Jon.
250    |a 1st edition.
264  1 |b O'Reilly Media, Inc., |c 2020.
300    |a 1 online resource (300 pages)
336    |a text |b txt |2 rdacontent
337    |a computer |b c |2 rdamedia
338    |a online resource |b cr |2 rdacarrier
347    |a text file
520    |a In a microservices architecture, the whole is indeed greater than the sum of its parts. But in practice, individual microservices can inadvertently impact others and alter the end user experience. Effective microservices architectures require standardization on an organizational level with the help of a platform engineering team. This practical book provides a series of progressive steps that platform engineers can apply technically and organizationally to achieve highly resilient Java applications. Author Jon Schneider covers many effective SRE practices from companies leading the way in microservices adoption. You'll examine several patterns that were created after much trial and error in recent years, complete with Java code examples. Chapters are organized according to specific patterns, including: Application metrics: availability, debuggability, and Micrometer; Debugging with observability: three pillars of observability; components of a distributed trace; Charting and alerting: building effective charts; KPIs for Java microservices; Safe multi-cloud delivery: automated canary analysis; Source code observability: the problem of dependencies; API utilization; Traffic management: concurrency of systems; platform, gateway, and client-side load balancing.
542    |f Copyright © 2020 Jon Schneider and Olga Kundzich
550    |a Made available through: Safari, an O'Reilly Media Company.
588    |a Online resource; title from title page (viewed December 25, 2020)
505 0  |a Intro -- Foreword -- Preface -- My Journey -- Conventions Used in This Book -- O'Reilly Online Learning -- How to Contact Us -- Acknowledgments -- 1. The Application Platform -- Platform Engineering Culture -- Monitoring -- Monitoring for Availability -- Google's approach to SLOs -- A less formal approach to SLOs -- Monitoring as a Debugging Tool -- Learning to Expect Failure -- Effective Monitoring Builds Trust -- Delivery -- Traffic Management -- Capabilities Not Covered -- Testing Automation -- Chaos Engineering and Continuous Verification -- Configuration as Code
505 8  |a Encapsulating Capabilities -- Service Mesh -- Summary -- 2. Application Metrics -- Black Box Versus White Box Monitoring -- Dimensional Metrics -- Hierarchical Metrics -- Micrometer Meter Registries -- Creating Meters -- Naming Metrics -- Common Tags -- Classes of Meters -- Gauges -- Counters -- Timers -- "Count" Means "Throughput" -- "Count" and "Sum" Together Mean "Aggregable Average" -- Maximum Is a Decaying Signal That Isn't Aligned to the Push Interval -- The Sum of Sum Over an Interval -- The Base Unit of Time -- Using Timers -- Common Features of Latency Distributions
505 8  |a Percentiles/Quantiles -- Histograms -- Service Level Objective Boundaries -- Distribution Summaries -- Long Task Timers -- Choosing the Right Meter Type -- Controlling Cost -- Coordinated Omission -- Load Testing -- Meter Filters -- Deny/Accept Meters -- Transforming Metrics -- Configuring Distribution Statistics -- Separating Platform and Application Metrics -- Partitioning Metrics by Monitoring System -- Meter Binders -- Summary -- 3. Debugging with Observability -- The Three Pillars of Observability ... or Is It Two? -- Logs -- Distributed Tracing -- Metrics -- Which Telemetry Is Appropriate?
505 8  |a Components of a Distributed Trace -- Types of Distributed Tracing Instrumentation -- Manual Tracing -- Agent Tracing -- Framework Tracing -- Service Mesh Tracing -- Blended Tracing -- Sampling -- No Sampling -- Rate-Limiting Samplers -- Probabilistic Samplers -- Boundary Sampling -- Impact of Sampling on Anomaly Detection -- Distributed Tracing and Monoliths -- Correlation of Telemetry -- Metric to Trace Correlation -- Using Trace Context for Failure Injection and Experimentation -- Summary -- 4. Charting and Alerting -- Differences in Monitoring Systems
505 8  |a Effective Visualizations of Service Level Indicators -- Styles for Line Width and Shading -- Errors Versus Successes -- "Top k" Visualizations -- Prometheus Rate Interval Selection -- Gauges -- Counters -- Timers -- When to Stop Creating Dashboards -- Service Level Indicators for Every Java Microservice -- Errors -- Latency -- Server (inbound) requests -- Client (outbound) requests -- Garbage Collection Pause Times -- Max pause time -- Proportion of time spent in garbage collection -- The presence of any humongous allocation -- Heap Utilization
590    |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
710 2  |a Safari, an O'Reilly Media Company.
856 40 |u https://learning.oreilly.com/library/view/~/9781492073918/?ar |z Full text (requires prior registration with an institutional email address)
936    |a BATCHLOAD
938    |a ProQuest Ebook Central |b EBLB |n EBL6320975
994    |a 92 |b IZTAP