Cybersecurity · Evolving Threat Landscape · Artificial Intelligence — Ervin Daniels, August 1, 2025
As organizations accelerate their adoption of generative AI, they face growing challenges in securing sensitive data, protecting model integrity, and managing the use of AI systems across the enterprise. The primary security goal is to ensure AI operates within clearly defined guardrails, with proper governance, monitoring, and controls in place.
AI can introduce organizational risks when used without proper oversight, including bias, inaccuracy, and lack of transparency. When unsanctioned or unmanaged AI, often referred to as Shadow AI, is deployed, it may operate on outdated or unrepresentative data, make decisions that lack context, and leave no audit trail. The resulting outcomes are not only unfair or inaccurate but also untraceable and unaccountable, posing serious ethical, operational, and compliance concerns.
In this post, we explore how Shadow AI emerges, its dangers, and how to secure your AI ecosystem using a layered “Donut of Defense” model across four core capabilities: Discover, Assess, Control, and Report.
Shadow AI refers to artificial intelligence (AI) systems or machine learning models that are implemented and used within an organization without the knowledge or approval of the central IT, security, or governance teams. Individual departments or business units often deploy these systems to address specific needs or enhance productivity; however, they typically operate outside the organization’s established frameworks for governance, security, and compliance.
Shadow AI includes, for example:

- Employees pasting sensitive data into public chatbots and other generative AI tools
- AI features quietly enabled inside sanctioned SaaS applications
- Models, agents, or AI-backed APIs deployed by business units without IT or security review
The issue isn’t just the existence of Shadow AI; it’s the lack of visibility, control, and security around these deployments.
Generative AI is now widely accessible through web-based tools, APIs, and open-source models, making it easier than ever for anyone to build and deploy AI. However, when development and data usage occur without proper governance, they introduce significant business risk.
AI systems are not like traditional software. They:

- Learn from data, so their behavior shifts as that data drifts
- Produce probabilistic, sometimes unpredictable outputs
- Depend on opaque training pipelines and third-party models

These characteristics open new threat vectors, such as:

- Prompt injection and jailbreaking
- Training data poisoning
- Model theft and inversion
- Sensitive data leakage through model outputs
With Shadow AI, none of these risks are visible until it’s too late.
To combat Shadow AI and secure AI workloads, a layered defense model, dubbed the “Donut of Defense,” is required. This approach surrounds the AI system with layered security capabilities aligned with organizational governance.
You can’t protect what you don’t know exists. Organizations are deploying AI use cases and models across multiple applications, platforms, and environments, but often lack visibility into these deployments.
Start by:

- Inventorying AI models, agents, and AI-enabled applications across all environments
- Scanning code repositories and network traffic for AI libraries and API calls
- Mapping which data sources each AI system can reach
Automated AI discovery tools ensure your security program is rooted in visibility, the foundation of any trustworthy AI ecosystem.
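As a toy illustration of code-level discovery, a scanner might flag repositories that declare well-known AI packages or call well-known AI API hosts. The package names and hosts below are illustrative assumptions, not an exhaustive detection list:

```python
import re

# Illustrative indicators of AI usage; a real discovery tool
# would use a much larger, maintained signature set.
AI_PACKAGES = {"openai", "anthropic", "transformers", "langchain", "torch"}
AI_HOSTS = re.compile(r"api\.(openai|anthropic|cohere)\.com")

def scan_requirements(text: str) -> set[str]:
    """Return AI-related packages declared in a requirements.txt body."""
    found = set()
    for line in text.splitlines():
        # Strip version specifiers, extras, and markers to get the bare name.
        name = re.split(r"[=<>\[;\s]", line.strip(), maxsplit=1)[0].lower()
        if name in AI_PACKAGES:
            found.add(name)
    return found

def scan_source(text: str) -> bool:
    """Flag source code that calls a known AI API endpoint."""
    return bool(AI_HOSTS.search(text))
```

Hooking checks like these into CI or a repository crawler turns incidental discovery into a continuous inventory feed.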
Once discovered, AI systems must be continuously evaluated for:

- Model vulnerabilities and misconfigurations
- Data exposure and privacy risks
- Compliance with internal policy and external regulation
Tools like penetration testing for AI models and AI Security Posture Management (AI-SPM) can help you prioritize and remediate critical exposures.
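To make the assessment step concrete, here is a minimal sketch of rule-based posture checks over an AI asset inventory. The asset fields and the two rules are assumptions chosen for illustration; AI-SPM products apply far richer policy sets:

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str
    owner: str = ""                 # accountable team, if any
    data_classes: list[str] = field(default_factory=list)  # e.g. ["pii"]
    internet_facing: bool = False

def assess(asset: AIAsset) -> list[str]:
    """Return posture findings for one inventoried AI asset."""
    findings = []
    if not asset.owner:
        findings.append("no accountable owner")
    if "pii" in asset.data_classes and asset.internet_facing:
        findings.append("PII processed by an internet-facing model")
    return findings
```

Running such checks on every discovered asset gives a prioritized remediation queue rather than a static inventory.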
AI gateways play a vital role in:

- Enforcing policy on prompts and responses in real time
- Authenticating and authorizing access to models
- Logging AI interactions for audit and forensics
Controls must also extend to data governance, especially preventing sensitive data from leaking through model responses or training sets.
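One common gateway-layer control is redacting sensitive tokens from prompts before they leave the organization. The patterns below are a deliberately small illustration; production gateways rely on full DLP classifiers rather than two regexes:

```python
import re

# Illustrative patterns only; real deployments use broader DLP rule sets.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Mask sensitive tokens before the prompt is forwarded to a model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Because the gateway sits in line with every request, the same hook point can also log the original and redacted prompts for audit.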
Dashboards help teams:

- Track discovered AI assets and their risk posture over time
- Monitor policy violations and anomalous usage
- Demonstrate compliance to auditors and regulators
Proper reporting also empowers executive and AI governance teams to gain a comprehensive view of business risk and security, enabling them to make informed decisions about AI usage and risk tolerance.
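For the reporting layer, a minimal sketch of the kind of roll-up a dashboard tile might compute from raw findings. The field names and metrics here are assumptions for illustration:

```python
from collections import Counter

def summarize(findings: list[dict]) -> dict[str, int]:
    """Roll individual findings up into per-severity counts for a dashboard tile."""
    return dict(Counter(f["severity"] for f in findings))

def coverage(total_assets: int, assessed: int) -> float:
    """Percentage of discovered AI assets that have been assessed."""
    return round(100 * assessed / total_assets, 1) if total_assets else 0.0
```

Simple aggregates like these give executives a defensible answer to "how much of our AI estate is under management?"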
The rise of AI demands a parallel surge in AI-specific security capabilities. Relying on traditional security tooling isn’t enough. AI introduces new types of risk that require new layers of defense.
The “Donut of Defense” is more than a model; it’s a practical approach to discovering, assessing, controlling, and reporting on AI systems, ensuring innovation doesn’t come at the cost of risk.
Shadow AI isn’t going away. But with the right security strategy in place, you can turn that shadow into visibility—and that risk into resilience.
Want to learn more about protecting your AI systems? Follow this blog for future breakdowns on AI security threats, AI supply chain risks, and how to integrate security for AI with your existing security stack.
Written by: Ervin Daniels
Cybersecurity Architect with over 25 years of technology and security leadership and hands-on experience across industries including retail, public sector, financial services, and technology.
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of IBM.