LEADER 00000cgm a22000007a 4500
001 OR_on1143019008
003 OCoLC
005 20231017213018.0
006 m o c
007 cr cnu||||||||
007 vz czazuu
008 200220s2020 xx 036 vleng
040 __ |a AU@ |b eng |c AU@ |d UMI |d UAB |d OCLCF |d TOH |d OCLCO |d FZL |d OCLCQ
019 __ |a 1176539492 |a 1191047501 |a 1224591225 |a 1232111241 |a 1256358097 |a 1305891702 |a 1351584477 |a 1380766277
020 __ |z 0636920373377
024 8_ |a 0636920373391
029 0_ |a AU@ |b 000066785983
035 __ |a (OCoLC)1143019008 |z (OCoLC)1176539492 |z (OCoLC)1191047501 |z (OCoLC)1224591225 |z (OCoLC)1232111241 |z (OCoLC)1256358097 |z (OCoLC)1305891702 |z (OCoLC)1351584477 |z (OCoLC)1380766277
037 __ |a CL0501000126 |b Safari Books Online
050 _4 |a Q325.5
082 04 |a E VIDEO
049 __ |a UAMI
100 1_ |a Doshi, Tulsee, |e author.
245 10 |a Build more inclusive TensorFlow pipelines with fairness indicators |h [electronic resource] / |c Doshi, Tulsee.
250 __ |a 1st edition.
264 _1 |b O'Reilly Media, Inc., |c 2020.
300 __ |a 1 online resource (1 video file, approximately 36 min.)
336 __ |a two-dimensional moving image |b tdi |2 rdacontent
337 __ |a computer |b c |2 rdamedia
338 __ |a online resource |b cr |2 rdacarrier
344 __ |a digital |2 rdatr
347 __ |a video file
520 __ |a Machine learning (ML) continues to drive monumental change across products and industries. But as we expand the reach of ML to even more sectors and users, it's ever more critical to ensure that these pipelines work well for all users. Tulsee Doshi and Christina Greer outline their insights from their work in proactively building for fairness, using case studies built from Google products. They also explain the metrics that have been fundamental in evaluating their models at scale and the techniques that have proven valuable in driving improvements. Tulsee and Christina announce the launch of Fairness Indicators and demonstrate how the product can help with more inclusive development. Fairness Indicators is a new feature built into TensorFlow Extended (TFX) and on top of TensorFlow Model Analysis; it enables developers to compute metrics that identify common fairness risks and drive improvements. You'll leave with an awareness of how algorithmic bias might manifest in your product, the ways you could measure and improve performance, and how Google's Fairness Indicators can help. Prerequisite knowledge: a basic understanding of TensorFlow (useful but not required). What you'll learn: how to tactically identify and evaluate ML fairness risks using Fairness Indicators.
538 __ |a Mode of access: World Wide Web.
542 __ |f Copyright © O'Reilly Media, Inc.
550 __ |a Made available through: Safari, an O'Reilly Media Company.
588 __ |a Online resource; title from title screen (viewed February 28, 2020).
511 0_ |a Presenters, Tulsee Doshi, Christina Greer.
533 __ |a Electronic reproduction. |b Boston, MA : |c Safari. |n Available via World Wide Web.
590 __ |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
655 _4 |a Electronic videos.
700 1_ |a Greer, Christina, |e author.
710 2_ |a Safari, an O'Reilly Media Company.
856 40 |u https://learning.oreilly.com/videos/~/0636920373391/?ar |z Full text (requires prior registration with an institutional email address)
936 __ |a BATCHLOAD
994 __ |a 92 |b IZTAP