
Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery

As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to "fool" them presents a new attack vector. In this book, author Katy Warr examines the security implications of how DNNs interpret audio and images very differently from the way humans do. You'll...


Bibliographic Details
Call Number: Electronic book
Main Author: Warr, Katy (Author)
Corporate Author: Safari, an O'Reilly Media Company
Format: eBook
Language: English
Published: Sebastopol: O'Reilly Media, Incorporated, 2019.
Edition: First edition.
Online Access: Full text (requires prior registration with an institutional email address)
Table of Contents:
  • Part 1. An introduction to fooling AI. Introduction
  • Attack motivations
  • Deep neural network (DNN) fundamentals
  • DNN processing for image, audio, and video
  • Part 2. Generating adversarial input. The principles of adversarial input
  • Methods for generating adversarial perturbation
  • Part 3. Understanding the real-world threat. Attack patterns for real-world systems
  • Physical-world attacks
  • Part 4. Defense. Evaluating model robustness to adversarial inputs
  • Defending against adversarial inputs
  • Future trends : toward robust AI
  • Mathematics terminology reference.