Description
As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately “fool” them with data that wouldn’t trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs—the algorithms intrinsic to much of AI—are used daily to process image, audio, and video data.
Author Katy Warr considers attack motivations, the risks posed by this adversarial input, and methods for increasing AI robustness to these attacks. If you’re a data scientist developing DNN algorithms, a security architect interested in how to make AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you.
- Delve into DNNs and discover how they could be tricked by adversarial input
- Investigate methods used to generate adversarial input capable of fooling DNNs
- Explore real-world scenarios and model the adversarial threat
- Evaluate neural network robustness; learn methods to increase resilience of AI systems to adversarial data
- Examine some ways in which AI might become better at mimicking human perception in years to come