Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery
As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to "fool" them presents a new attack vector. In this book, author Katy Warr examines the security implications of how DNNs interpret audio and images very differently from the way humans do. You'll...
Classification: | eBook
---|---
Main author: | Warr, Katy
Format: | Electronic eBook
Language: | English
Published: | Sebastopol: O'Reilly Media, Incorporated, 2019
Edition: | First edition
Online access: | Full text (requires prior registration with an institutional email address)
Table of Contents:
- Part 1. An introduction to fooling AI
  - Introduction
  - Attack motivations
  - Deep neural network (DNN) fundamentals
  - DNN processing for image, audio, and video
- Part 2. Generating adversarial input
  - The principles of adversarial input
  - Methods for generating adversarial perturbation
- Part 3. Understanding the real-world threat
  - Attack patterns for real-world systems
  - Physical-world attacks
- Part 4. Defense
  - Evaluating model robustness to adversarial inputs
  - Defending against adversarial inputs
  - Future trends: toward robust AI
- Mathematics terminology reference