KEY TAKEAWAYS
Cybersecurity Maturity Model Certification (CMMC) 2.0 was finalized without a single direct reference to AI. For defense contractors and their supply chain vendors racing to adopt AI tools across engineering, proposal writing, customer service, and back-office operations, that silence can feel like permission. It isn't. Every existing CMMC control — access control, media protection, incident response, system and communications protection — still applies the moment AI touches your data. Here are 10 risks CMMC-bound organizations should address now, before an assessor raises them first.
1. CMMC Doesn't Say "AI," and That's the First Trap
The absence of AI-specific language in CMMC 2.0 leads many organizations to assume AI tools are implicitly allowed. That's misleading. AI usage falls squarely under existing controls for access, media protection, incident response, and data handling. The lack of explicit guidance creates a false sense of safety, and assessors will not share that assumption.
2. AI Tools Quietly Break Data Boundary Assumptions
Many AI platforms process, cache, or retain data outside your defined system boundary, often in ways teams don't fully understand. That creates hidden violations of Controlled Unclassified Information (CUI) handling requirements and system boundary definitions, even when users believe they're operating "internally."
3. Prompt Data Is Still Data
Employees often don't think of AI prompts as data transfers. In reality, a prompt may include CUI, export-controlled information, or operational details, all of which are subject to the same handling rules as any other outbound data. Without explicit policy, training, and logging, AI use becomes an unmonitored exfiltration path.
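The policy-and-logging point above can be made concrete with a small gateway check that screens outbound prompts before they leave the boundary. This is a minimal sketch, not a substitute for a real DLP solution: the `CUI_PATTERNS` list, the `screen_prompt` helper, and the user names are all hypothetical, and a production deployment would use the organization's actual classification markings.

```python
import json
import logging
import re
from datetime import datetime, timezone

# Hypothetical CUI indicators for illustration only; a real deployment
# would use the organization's own markings and DLP pattern library.
CUI_PATTERNS = [
    re.compile(r"\bCUI\b"),
    re.compile(r"\bITAR\b"),
    re.compile(r"\bexport[- ]controlled\b", re.IGNORECASE),
]

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-prompt-gateway")

def screen_prompt(user: str, prompt: str) -> bool:
    """Log the outbound prompt and block it if it appears to contain CUI.

    Returns True if the prompt may be forwarded to the AI tool.
    """
    hits = [p.pattern for p in CUI_PATTERNS if p.search(prompt)]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_length": len(prompt),
        "blocked": bool(hits),
        "matched_patterns": hits,
    }
    # Feed the same audit pipeline that monitors other outbound data.
    log.info(json.dumps(record))
    return not hits

# A prompt referencing ITAR material is blocked; a benign one passes.
screen_prompt("jdoe", "Summarize this ITAR technical data package")  # blocked
screen_prompt("jdoe", "Draft an agenda for Tuesday's staff meeting")  # allowed
```

The design point is that the prompt is treated exactly like any other outbound transfer: inspected, attributed to a user, and logged before it crosses the boundary.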
4. AI Expands the Insider Risk Surface
CMMC focuses heavily on access and accountability, but AI tools multiply the impact of any single user's access. Before, a malicious insider had to open files that created logs and alerts, then actually read through the content to find what they were after. Today, that same insider can simply prompt, "Hey, this is what I'm looking for. Help me find files and data that might have it." That's a stronger, more efficient insider operating at a scale the original controls never contemplated.
5. Logging and Traceability Break Down Fast
Many AI tools lack CMMC-grade logging, retention, and attribution. When an assessor asks, "Who accessed what CUI, when, and how?" AI workflows often can't answer cleanly, creating evidence gaps on top of control gaps.
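One way to close that evidence gap is to wrap every AI interaction in a structured record that answers the assessor's question directly. The sketch below is illustrative; the `AIAccessEvent` schema and field names are assumptions, not a standard, and the storage backend is left out.

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class AIAccessEvent:
    # The four things an assessor will ask for: who, what, when, and how.
    user_id: str    # who  - the authenticated human, not a shared API key
    resource: str   # what - the document or data set the AI touched
    timestamp: str  # when - UTC, from a synchronized clock
    channel: str    # how  - the tool, model, and interface used

def record_event(user_id: str, resource: str, channel: str) -> str:
    """Serialize one AI access event for retained, tamper-evident storage."""
    event = AIAccessEvent(
        user_id=user_id,
        resource=resource,
        timestamp=datetime.now(timezone.utc).isoformat(),
        channel=channel,
    )
    return json.dumps(asdict(event))

# Example: attribute a retrieval made through a chat assistant to a person.
line = record_event("jdoe", "contracts/cui/spec-rev3.docx", "copilot-chat")
```

If the AI platform itself can't emit records like this with per-user attribution and CMMC-grade retention, that limitation belongs in the risk register before an assessor finds it.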
6. AI Supply Chain Risk Is Largely Invisible
CMMC emphasizes supply chain risk, yet AI vendors frequently rely on nested models, third-party APIs, and offshore processing. Organizations commonly vet the front-end tool but never assess the full AI dependency stack, turning the supply chain requirement into one of the easiest findings for an assessor to write up.
There's an emerging wrinkle here, too: Agent-to-agent infrastructure like Model Context Protocol (MCP) servers — where AI systems talk to other AI systems through API-like connections — is already reshaping what the supply chain means. If you don't know what your vendor's AI is talking to, you don't know what's touching your data, and that responsibility can't be outsourced. It's on each organization to understand the risks.
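A deny-by-default connection check is one simple way to keep that agent-to-agent surface visible. This is a sketch under stated assumptions: the `APPROVED_MCP_SERVERS` allowlist and the hostnames are hypothetical, and real enforcement would live at the network or proxy layer rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only MCP servers the organization has vetted
# through its supply chain risk process.
APPROVED_MCP_SERVERS = {"mcp.internal.example.com"}

def may_connect(server_url: str) -> bool:
    """Deny-by-default check before an agent opens an MCP connection."""
    host = urlparse(server_url).hostname
    return host in APPROVED_MCP_SERVERS

# A vetted internal server is permitted; an unknown vendor endpoint is not.
may_connect("https://mcp.internal.example.com/tools")   # permitted
may_connect("https://unknown-vendor.example.net/mcp")   # denied
```

The value isn't the six lines of code; it's that maintaining the allowlist forces someone to actually enumerate and vet the AI dependency stack.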
7. Training Data Contamination Is a Real Compliance Risk
Even when vendors claim they don't train on your data, usage patterns, telemetry, and fine-tuning workflows can still create data persistence. CMMC organizations need to understand how a vendor technically enforces non-retention, not just what the marketing page says.
8. Incident Response Plans Rarely Address AI-Specific Scenarios
Most incident response plans were written before AI adoption. Few address AI-generated data leakage, hallucinated outputs causing operational harm, or misuse of AI during an active incident. That leaves response teams flat-footed during both assessments and real events.
9. AI Creates Policy Debt Faster Than Technical Debt
Organizations often deploy AI before updating acceptable use, data classification, vendor management, and employee training policies. That debt compounds: technology keeps accelerating, and with AI changing day to day in ways that feel genuinely new, the gap between how the business actually operates and what the policies say keeps widening. It's one of the first things assessors notice.
10. The Real Risk: Treating AI as a Tool Instead of a Capability
The biggest mistake CMMC-bound organizations make is treating AI like a new app. AI isn't simply software. It has become a thought partner, transforming every job function in our organizations right before our eyes. It redesigns business processes. It shifts how data moves from point A to point B. It changes risk models. Without governance, guardrails, and executive ownership, that kind of capability quietly erodes compliance posture over time.
The Bottom Line
CMMC doesn't prohibit AI, but it absolutely punishes ambiguity. Organizations across the defense supply chain that treat AI casually today are creating assessment findings tomorrow. The organizations that get this right won't wait for the framework to catch up. Instead, they'll apply the controls they already have to the tools they're already using and document it before an assessor asks.
Close the AI Compliance Gap
SBS CyberSecurity helps organizations in highly regulated environments prepare for independent assessments, regulatory scrutiny, and evolving security expectations.
Chad Knutson
Chad has been dedicated to educating industry professionals about cybersecurity for over 20 years. While consulting with financial institutions, he saw the need to empower employees to be better prepared to confidently handle cybersecurity threats, create and manage strong information security programs, and understand ever-changing regulations. This led Chad to be a driving force in the development of the SBS Institute, where he served as president for seven years. Chad maintains his CISSP, CISA, CRISC, and CDPSE certifications. He received his Bachelor of Science in Computer Information Systems and Master of Science in Information Assurance from Dakota State University, a Center of Academic Excellence in Information Assurance Education designated by the NSA.
Chad is incredibly passionate about cybersecurity training and education for everyone — directors, employees, and customers alike. He is an instructor for SBS Institute courses, webinar host, and frequently speaks on cybersecurity topics at a variety of events and trainings across the country, including trainings for state examiners.