Adversarial AI Threat Response and Secure Model Design
€62.99
Available immediately; delivery time: immediate
Adversarial AI Threat Response and Secure Model Design, Apress
Practical Techniques for Detecting, Preventing, and Managing AI Vulnerabilities
By Goran Trajkovski, available in digital form from the heise shop
Product information: "Adversarial AI Threat Response and Secure Model Design"
As artificial intelligence becomes embedded in everything from healthcare
diagnostics to financial systems and autonomous vehicles, the stakes for AI
security have never been higher. Adversarial AI Threat Response and Secure
Model Design is your essential guide to understanding, defending against,
and designing resilient machine learning systems in the face of growing
adversarial threats.
Written by a leading expert in AI security and policy, this book delivers a
combination of technical depth, practical implementation, and strategic insight.
It begins by mapping the full landscape of adversarial threats—evasion,
poisoning, model extraction, backdoors, and more—across diverse data
modalities and real-world applications. From there, it equips readers with a
robust toolkit of detection and defense techniques, including adversarial
training, anomaly detection, and formal robustness certification.
But this book goes beyond code. It explores the organizational, ethical, and
regulatory dimensions of AI security, offering guidance on risk quantification,
explainability, and compliance with frameworks like the EU AI Act. With hands-on
projects, open-source tools, and case studies in high-stakes domains, readers
will learn to design secure-by-default systems that are not only technically
sound but socially responsible.
Whether you're an AI engineer deploying models in production, a cybersecurity
professional defending intelligent systems, or an educator preparing the next
generation of AI talent, this book provides the clarity, rigor, and foresight
needed to stay ahead of adversarial threats. It’s not just a
reference—it’s a roadmap for building trustworthy AI.
What You Will Learn:
- Understand the full spectrum of adversarial threats to AI systems, including evasion, poisoning, backdoor injection, and model extraction, across vision, language, and multimodal applications.
- Apply practical detection and defense techniques using real tools and code, including adversarial training, statistical anomaly detection, input preprocessing, and ensemble defenses.
- Evaluate and balance trade-offs between accuracy, robustness, performance, and interpretability in the design of secure machine learning systems.
- Navigate the regulatory, ethical, and risk management challenges associated with adversarial AI, including disclosure practices, auditability, and compliance with emerging AI laws.
- Design, implement, and test secure-by-design AI solutions through hands-on projects and real-world case studies that span sectors such as healthcare, finance, and autonomous systems.
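As a taste of the techniques listed above, here is a minimal, self-contained sketch (illustrative only, not taken from the book) of an FGSM-style evasion attack on a toy logistic-regression classifier; the data, parameters, and epsilon value are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy two-class data: the label is the sign of the first feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

# Fit logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * float(np.mean(p - y))

def accuracy(X_, y_):
    return float(np.mean((sigmoid(X_ @ w + b) > 0.5) == y_))

def fgsm(X_, y_, eps=0.5):
    # For logistic loss the input gradient is (p - y) * w; FGSM steps
    # eps in the sign of that gradient to push inputs across the boundary.
    p = sigmoid(X_ @ w + b)
    return X_ + eps * np.sign((p - y_)[:, None] * w[None, :])

clean_acc = accuracy(X, y)
adv_acc = accuracy(fgsm(X, y), y)
print(f"clean accuracy: {clean_acc:.2f}, accuracy under FGSM: {adv_acc:.2f}")
```

Even on this toy model the attack degrades accuracy noticeably; adversarial training, one of the defenses the book covers, would retrain on a mix of clean and FGSM-perturbed inputs to recover robustness.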
Product details
- Publisher: Apress
- Author: Goran Trajkovski
- Item number: 9798868823084
- Published: 15 April 2026
Accessibility
This PDF has been created in accordance with the PDF/UA-1 standard to enhance accessibility, including screen reader support, described non-text content (images, graphs), and bookmarks for easy navigation.
- conforms to the requirements of PDF/UA-1 (05)
- no read-aloud functions of the reading system are disabled (except for) (10)
- navigable table of contents (11)
- logical reading order maintained (13)
- short alternative texts (e.g. for figures) provided (14)
- content presented understandably even without color perception (25)
- high contrast between text and background (26)
- navigation via previous/next elements (29)
- all content necessary for comprehension accessible via screen reader (52)
- contact the publisher for further information on accessibility (99)