Shadow AI and the Donut of Defense: A Practical Guide to Securing AI Systems

CYBERSECURITY + EVOLVING THREAT LANDSCAPE + FEATURED + Artificial Intelligence Ervin Daniels todayAugust 1, 2025

Background
share close

As organizations accelerate their adoption of generative AI, they face increasing challenges in securing sensitive data, protecting model integrity, and managing the use of AI systems across the enterprise. The primary goal of using AI is to ensure it functions within clearly defined guardrails, ensuring proper governance, monitoring, and security.

AI can introduce organizational risks when used without the proper oversight, including bias, inaccuracy, and lack of transparency. When unsanctioned or unmanaged AI, often referred to as Shadow AI, is deployed, it may operate on outdated or unrepresentative data, make decisions that lack context, and leave no audit trail. This results in outcomes that are not only unfair or inaccurate but also untraceable and unaccountable, posing serious ethical, operational, and compliance concerns.

Organizational Risks:

  1. Security Risk: AI systems lacking oversight, visibility, and secure-by-design architecture are more susceptible to breaches, misuse, and unintended exposure of sensitive information.
  2. Data Privacy Risk: Unregulated access to customers or sensitive data by AI systems without proper privacy safeguards increases the risk of misuse, data leakage, and regulatory non-compliance.
  3. Shadow AI Risk: AI systems operating outside of the approved model risk management process pose significant risks by bypassing established security, compliance, and risk management controls.

In this post, we explore how Shadow AI emerges, its dangers, and how to secure your AI ecosystem using a layered “Donut of Defense” model across four core capabilities: Discover, Assess, Control, and Report.

What Is Shadow AI?

Shadow AI refers to artificial intelligence (AI) systems or machine learning models that are implemented and used within an organization without the knowledge or approval of the central IT, security, or governance teams. Individual departments or business units often deploy these systems to address specific needs or enhance productivity; however, they typically operate outside the organization’s established frameworks for governance, security, and compliance.

Shadow AI includes:

  • Business units using generative AI tools like ChatGPT for sensitive data processing
  • Developers downloading and deploying third-party models (e.g., from Hugging Face)
  • Internal teams are training models without proper risk assessment or governance

The issue isn’t just the existence of Shadow AI; it’s the lack of visibility, control, and security around these deployments.

Why Shadow AI Is a Growing Threat

Generative AI is now widely accessible through web-based tools, APIs, and open-source models, making it easier than ever for anyone to build and deploy AI. However, when development and data usage occur without proper governance, they introduce significant business risk.

AI systems are not like traditional software. They:

  • Learn from sensitive data
  • Interact with external users via prompts
  • Make decisions that impact real-world outcomes

These AI risks open new threat vectors like:

With Shadow AI, none of these risks are visible until it’s too late.

Introducing the Donut of Defense: A Layered AI Security Strategy

To combat Shadow AI and secure AI workloads, a layered defense model, dubbed the “Donut of Defense,” is required. This approach surrounds the AI system with layered security capabilities aligned with organizational governance.

Donut of Defense

Phase 1: Discover – Gain Full Visibility

You can’t protect what you don’t know exists. Organizations are deploying AI use cases and models across multiple applications, platforms, and environments, but often lack visibility into these deployments.

Start by:

  • Performing agentless discovery of AI workloads across cloud and on-prem infrastructure
  • Creating a complete inventory of known and unknown (Shadow AI) systems
  • Logging every model, dataset, and endpoint used in AI workflows in a centralized repository
  • Automated and continuous monitoring of AI systems across environments across multi-cloud, multi-vendor services (e.g., embedded AI, and co-pilots).

Automated AI discovery tools ensure your security program is rooted in visibility, the foundation of any trustworthy AI ecosystem.

Phase 2: Assess – Identify Risk and Vulnerabilities

Once discovered, AI systems must be continuously evaluated for:

  • Unintended exposure of sensitive training data or sources used for RAG (Retrieval Augmented Generation).
  • Inadequate control over AI access and behavior
  • Vulnerabilities and misconfigurations
  • Model risk from imported third-party sources

Tools like penetration testing for AI models and AI Security Posture Management (AI-SPM) can help you prioritize and remediate critical exposures.

Phase 3: Control – Apply Guardrails and Protection

AI gateways play a vital role in:

  • Mediating user prompts
  • Detecting prompt injection and jailbreak attempts
  • Enforcing guardrails and usage policies

Controls must extend to data governance—especially preventing sensitive data from being leaked via model responses or training sets.

Phase 4: Report – Centralized Dashboards and Compliance

Dashboards help teams:

  • Visualize risks in context.
  • Track AI system health and exposures
  • Generate audit-ready reports to support frameworks like MITRE ATLAS and OWASP Top 10 for LLMs and map to security standards.
  • Comply with regulations and map assessment frameworks, including  EU AI, NIST RMF, DASF, ISO, and many others.

Proper reporting also empowers executive and AI governance teams to gain a comprehensive view of business risk and security, enabling them to make informed decisions about AI usage and risk tolerance.

Bringing It All Together

The rise of AI demands a parallel surge in AI-specific security capabilities. Relying on traditional security tooling isn’t enough. AI introduces new types of risk that require new layers of defense.

The “Donut of Defense” is more than a model; it’s a practical approach to discovering, assessing, controlling, and reporting on AI systems, ensuring innovation doesn’t come at the cost of risk.

Shadow AI isn’t going away. But with the right security strategy in place, you can turn that shadow into visibility—and that risk into resilience.

Next Steps for Security Leaders

  • Run an AI discovery scan across your environment.
  • Evaluate current security controls for generative AI usage.
  • Align AI practices with a risk and compliance framework (e.g., MITRE ATLAS, OWASP LLM Top 10).
  • Establish an AI governance council to ensure visibility and adherence to policy.

Want to learn more about protecting your AI systems? Follow this blog for future breakdowns on AI security threats, AI supply chain risks, and how to integrate security for AI with your existing security stack.

Ervin Daniels

Written by: Ervin Daniels

Rate it
About the author
Avatar

Ervin Daniels

Cybersecurity Architect with over 25 years of Technology and Security leadership and hands-on experience across various industries (retail, public, financial services, and technology).


Previous post

Post comments (0)

Leave a reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.

©2020 Ervin Daniels. Designed By Tru Brand Media Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of IBM.

error: Content is protected !!