
TensorFlow Privacy : Learning with differential privacy for training data

When evaluating ML models, it can be difficult to tell the difference between what the models learned to generalize from training and what the models have simply memorized. And that difference can be crucial in some ML tasks, such as when ML models are trained using sensitive data. Recently, new techniques have emerged for differentially private training of ML models, including deep neural networks (DNNs), that use modified stochastic gradient descent to provide strong privacy guarantees for training data. Those techniques are now available, and they're both practical and easy to use. That said, they come with their own set of hyperparameters that need to be tuned, and they necessarily make learning less sensitive to outlier data in ways that are likely to slightly reduce utility. Úlfar Erlingsson explores the basics of ML privacy, introduces differential privacy and why it's considered a gold standard, explains the concrete use of ML privacy and the principled techniques behind it, and dives into intended and unintended memorization and how it differs from generalization.
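The abstract describes differentially private training via modified stochastic gradient descent (DP-SGD), which is what the TensorFlow Privacy library implements. As a minimal sketch, assuming a recent tensorflow_privacy release (the hyperparameter values below are illustrative, not taken from the talk), a Keras model can be trained with the library's DP-SGD optimizer roughly like this:

import tensorflow as tf
import tensorflow_privacy

# DP-SGD hyperparameters (the "hyperparameters that need to be tuned"
# mentioned in the abstract); these values are illustrative only.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,        # bound each example's gradient contribution
    noise_multiplier=1.1,    # Gaussian noise scale relative to the clip norm
    num_microbatches=32,     # must evenly divide the batch size
    learning_rate=0.15,
)

# The loss must return per-example values (reduction=NONE) so the
# optimizer can clip gradients per microbatch before averaging.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE
)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=15, batch_size=32)

The strength of the resulting (epsilon, delta) guarantee depends on the noise multiplier, batch size, dataset size, and number of epochs; tensorflow_privacy also ships a privacy accountant (compute_dp_sgd_privacy) for estimating epsilon after training.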


Bibliographic Details
Main Author: Erlingsson, Úlfar (Author)
Corporate Author: Safari, an O'Reilly Media Company
Format: Electronic Video
Language: English
Published: O'Reilly Media, Inc., 2020.
Edition: 1st edition.
Online Access: Full text (requires prior registration with an institutional email)

MARC

LEADER 00000cgm a22000007a 4500
001 OR_on1143019133
003 OCoLC
005 20231017213018.0
006 m o c
007 cr cnu||||||||
007 vz czazuu
008 200220s2020 xx 041 vleng
040 |a AU@  |b eng  |c AU@  |d STF  |d NZCPL  |d OCLCF  |d OCLCO  |d OCLCQ 
019 |a 1193323400  |a 1232117608  |a 1305857351 
020 |z 0636920373469 
024 8 |a 0636920373483 
029 0 |a AU@  |b 000066786054 
035 |a (OCoLC)1143019133  |z (OCoLC)1193323400  |z (OCoLC)1232117608  |z (OCoLC)1305857351 
049 |a UAMI 
100 1 |a Erlingsson, Úlfar,  |e author. 
245 1 0 |a TensorFlow Privacy :  |b Learning with differential privacy for training data  |h [electronic resource] /  |c Erlingsson, Úlfar. 
250 |a 1st edition. 
264 1 |b O'Reilly Media, Inc.,  |c 2020. 
300 |a 1 online resource (1 video file, approximately 41 min.) 
336 |a two-dimensional moving image  |b tdi  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
347 |a video file 
520 |a When evaluating ML models, it can be difficult to tell the difference between what the models learned to generalize from training and what the models have simply memorized. And that difference can be crucial in some ML tasks, such as when ML models are trained using sensitive data. Recently, new techniques have emerged for differentially private training of ML models, including deep neural networks (DNNs), that use modified stochastic gradient descent to provide strong privacy guarantees for training data. Those techniques are now available, and they're both practical and easy to use. That said, they come with their own set of hyperparameters that need to be tuned, and they necessarily make learning less sensitive to outlier data in ways that are likely to slightly reduce utility. Úlfar Erlingsson explores the basics of ML privacy, introduces differential privacy and why it's considered a gold standard, explains the concrete use of ML privacy and the principled techniques behind it, and dives into intended and unintended memorization and how it differs from generalization. Prerequisite knowledge: experience using TensorFlow to train ML models; a basic understanding of stochastic gradient descent. What you'll learn: what it means to provide privacy guarantees for ML models and how such guarantees can be achieved in practice using TensorFlow Privacy. 
538 |a Mode of access: World Wide Web. 
542 |f Copyright © O'Reilly Media, Inc. 
550 |a Made available through: Safari, an O'Reilly Media Company. 
588 |a Online resource; Title from title screen (viewed February 28, 2020) 
533 |a Electronic reproduction.  |b Boston, MA :  |c Safari.  |n Available via World Wide Web. 
590 |a O'Reilly  |b O'Reilly Online Learning: Academic/Public Library Edition 
710 2 |a Safari, an O'Reilly Media Company. 
856 4 0 |u https://learning.oreilly.com/videos/~/0636920373483/?ar  |z Full text (requires prior registration with an institutional email) 
936 |a BATCHLOAD 
994 |a 92  |b IZTAP