Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery
| Field | Value |
|---|---|
| Classification | eBook |
| Main Author | Warr, Katy |
| Corporate Author | |
| Format | Electronic eBook |
| Language | English |
| Published | Sebastopol: O'Reilly Media, Incorporated, 2019. |
| Edition | First edition. |
| Subjects | |
| Online Access | Full text (requires prior registration with an institutional email address) |
| Summary | As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to "fool" them presents a new attack vector. In this book, author Katy Warr examines the security implications of how DNNs interpret audio and images very differently to humans. You'll learn about the motivations attackers have for exploiting flaws in DNN algorithms and how to assess the threat to systems incorporating neural network technology. Through practical code examples, this book shows you how DNNs can be fooled and demonstrates the ways they can be hardened against trickery. Learn the basic principles of how DNNs "think" and why this differs from our human understanding of the world. Understand adversarial motivations for fooling DNNs and the threat posed to real-world systems. Explore approaches for making software systems that incorporate DNNs less susceptible to trickery. Peer into the future of Artificial Neural Networks to learn how these algorithms may evolve to become more robust. |
| Physical Description | 1 online resource (250 pages) |
| Bibliography | Includes bibliographical references and index. |
| ISBN | 9781492044925; 149204492X; 9781492044901; 1492044903 |