Skip to main content
Compliance

AI Security for Defense Contractors: CMMC and FedRAMP 2026

BT

BeyondScale Team

AI Security Team

13 min read

Defense contractors deploying AI systems now face a new regulatory reality. The FY2026 National Defense Authorization Act (NDAA), enacted December 2025, mandates a comprehensive AI security framework for DoD contractors through Section 1513. Combined with CMMC 2.0 Phase 2 beginning November 2026 and evolving FedRAMP AI requirements, AI security for defense contractors has moved from best practice to legal obligation.

This guide covers what Section 1513 requires, how the AI framework integrates with CMMC and FedRAMP, the specific threat landscape targeting defense AI systems, and a practical 90-day compliance roadmap.

Key Takeaways

    • NDAA FY2026 Section 1513 requires DoD to develop an AI security framework covering supply chain risks, adversarial tampering, and continuous monitoring; the framework will be incorporated into DFARS and CMMC
    • Section 1532 already prohibits defense contractors from using AI developed by DeepSeek, High Flyer, or companies subject to foreign ownership from China, Russia, North Korea, or Iran
    • CMMC 2.0 Phase 2 starts November 10, 2026; only 1% of the defense industrial base is currently certified
    • Documented AI attacks against defense include prompt injection targeting GenAI.mil's web-grounding capability and nation-state data poisoning campaigns against AI training datasets
    • NIST AI RMF, DoD Zero Trust Strategy 2.0, and IEC 62443 provide the complementary frameworks for building NDAA-aligned AI security controls
    • False Claims Act liability now attaches to misrepresentation of AI security compliance under the new framework

What NDAA FY2026 Section 1513 Actually Requires

Section 1513 directs the Secretary of Defense to develop a risk-based framework for the cybersecurity and physical security of AI and ML technologies acquired by or for the Pentagon. The scope is broad: "covered AI/ML" encompasses source code, model weights, methods and algorithms, training data, and software used to develop the system. Any entity entering into a contract with DoD for the development, deployment, storage, or hosting of covered AI/ML qualifies as a covered entity. That includes subcontractors.

The framework must address four principal risk categories:

Supply chain risks: Data poisoning attacks that contaminate training datasets to cause misclassification or embed backdoors, adversarial tampering with hardware, software, data, or processes, and unintentional data exposure through misconfiguration.

Adversarial threats: Model jailbreaks, adversarial prompt injection attacks, and unauthorized manipulation of model behavior at inference time.

Workforce and insider threats: Personnel with legitimate access to AI systems misusing that access, either deliberately or through compromise.

Security monitoring: Continuous monitoring procedures for AI systems, incident reporting requirements, and evaluation of commercial platforms for automated monitoring.

On the physical security side, the framework will require secure facilities, environmental controls, video surveillance, and secure transportation procedures for AI hardware and model weights.

DoD must deliver a status report to Congress by June 16, 2026, covering implementation timelines, milestones, funding requirements, and effectiveness metrics. The framework itself is then incorporated into DFARS, binding all covered contractors through their contract terms.

Beyond Section 1513, Section 1532 is already in effect. It prohibits defense contractors from using AI developed by DeepSeek, High Flyer (DeepSeek's parent), or any company with a covered nation (China, Russia, North Korea, Iran) having a direct or indirect ownership stake of 20% or more. Limited waivers exist for scientifically valid research and counterterrorism testing, but the default prohibition applies to both prime contractors and their subcontractors.

The CMMC + AI Intersection

CMMC 2.0 Phase 2 begins November 10, 2026. Starting that date, contracting officers will require certified third-party Level 2 assessments by a C3PAO (Certified Third-Party Assessor Organization) as the default for contracts involving CUI. Self-assessments are no longer sufficient for most programs. The capacity gap is severe: as of early 2026, approximately 1,042 organizations out of the 76,598 requiring certification have completed it. Analyst estimates put wait times for new C3PAO clients at over 18 months by mid-2026.

Section 1513 explicitly describes the AI security framework as "an extension or augmentation" of CMMC. Legal analysts characterize the combination as "CMMC for AI." Practically, this means AI-specific security controls will layer on top of existing NIST 800-171 controls already embedded in CMMC Level 2. The AI additions will cover:

  • Input validation controls: Technical measures preventing adversarial inputs from manipulating model behavior (anti-prompt-injection)
  • Access controls with MFA and RBAC: For AI model endpoints, training infrastructure, and inference APIs
  • Output monitoring and logging: AI-generated output captured in audit logs with SIEM integration for anomaly detection
  • Adversarial attack prevention: Continuous red teaming, input filtering, and model robustness testing as ongoing processes rather than one-time audits
For contractors not yet certified under CMMC, the November 2026 deadline for Level 2 and the incoming AI controls represent a compound compliance obligation that requires immediate action. A contractor waiting on CMMC certification that also needs AI security controls has two sequential deliverables with overlapping preparation requirements.

FedRAMP AI Requirements and the 20x Initiative

Any cloud-based AI service provided to federal agencies, including DoD components, requires FedRAMP authorization. FedRAMP is actively prioritizing AI authorizations through a defined prioritization process.

To qualify for prioritized review, AI cloud services must demonstrate:

  • Enterprise-grade features: single sign-on, SCIM provisioning, role-based access control
  • Data separation guarantees: model training on customer data must not leave the customer environment without explicit authorization
  • Real-time analytics and audit logging
The FedRAMP 20x initiative introduces a cloud-native authorization model using automated "Key Security Indicators" that continuously generate compliance evidence rather than requiring periodic audits. Phase 3, covering Moderate-impact systems, opens for wide-scale adoption in FY26 Q3-Q4 (July through September 2026). Moderate-impact requirements in 20x expand automated evidence for configuration management, vulnerability detection and remediation, and identity assurance.

For defense contractors that both host AI services and consume them, the FedRAMP-CMMC intersection is increasingly convergent. Both frameworks emphasize continuous monitoring, evidence automation, and security process maturity. Infrastructure achieving FedRAMP authorization provides documented control evidence that overlaps with CMMC requirements, reducing the total compliance burden when addressed in an integrated program.

The Threat Landscape Targeting Defense AI Systems

The threat is not hypothetical. Documented incidents and research provide a clear picture of what adversaries target and how.

Prompt injection against enterprise AI deployments: GenAI.mil, the DoD's enterprise AI platform built on Google Cloud's Gemini for Government, reached approximately 3 million military personnel, civilian employees, and contractors by February 2026. Security researchers specifically flagged indirect prompt injection as a primary attack vector: a poisoned document in a shared repository, a manipulated webpage retrieved through the platform's web-grounding capability, or a crafted email forwarded for AI-assisted analysis could all inject instructions into the model's context. The UK's National Cyber Security Centre characterized prompt injection as potentially "never fully mitigable in the way SQL injection was."

Data poisoning of training pipelines: The Lieber Institute at West Point documented data poisoning as a "covert weapon" against military AI superiority. Specific techniques include label flipping (altering dataset labels to cause systematic misclassification) and backdoor attacks (embedding triggers that cause targeted malfunctions when specific inputs appear at inference time). Nation-state groups attributed to Chinese military intelligence have targeted AI assistants used by Western defense contractors to extract technical specifications and research data.

Model inversion and data extraction: EchoLeak (June 2025) demonstrated a vulnerability in Microsoft 365 Copilot, widely deployed across the defense industrial base, that extracted sensitive data without any user interaction by manipulating how the model processed user data. The exploit bypassed normal user behavior gates entirely.

AI-assisted attacks against AI defenders: 72% of AI-assisted cyberattacks increased since 2024. Adversaries using AI to probe and manipulate other AI systems is the emerging dual-use threat pattern that the NDAA framework is designed to address.

Building NDAA-Aligned AI Security Controls

The four control categories in Section 1513 map directly to implementable technical controls. Here is the architecture for each.

Input Validation and Prompt Injection Prevention

Defense AI systems require layered input controls at every ingestion point. For document-processing AI (the GenAI.mil attack surface), this means:

  • Content scanning pipelines that flag instruction-like patterns in document uploads before they reach model context
  • Prompt classification layers that identify injected instructions before model inference
  • Trust-level separation: user-provided content processed at lower trust levels than system instructions, with structural delimiters that resist manipulation
The OWASP LLM Top 10 defines prompt injection as LLM01 and provides a control taxonomy that maps to NDAA input validation requirements. For defense deployments, these controls should be documented as CMMC AC (Access Control) and SC (System and Communications Protection) implementations.

Access Controls for AI Systems

NDAA Section 1513 references MFA and RBAC for AI system access. In practice, this means:

  • Model endpoints protected with short-lived tokens, not persistent API keys
  • RBAC definitions that scope access to specific model capabilities (a logistics officer's AI assistant should not have the same access grants as an intelligence analyst's tool)
  • Privileged access workstations for AI training infrastructure that modifies model weights
  • Audit logging on all model access, including inference API calls, not just administrative access
For zero trust alignment, non-human identity security controls apply to AI service accounts accessing classified data. Each AI system should have a defined identity with documented permissions rather than shared credentials.

Output Monitoring and SIEM Integration

AI output monitoring is the control category most often absent from defense contractor AI deployments. The requirement has two components.

First, AI-generated outputs must be logged with sufficient context to reconstruct what input produced what output. This means logging input metadata (source, classification level, user identity) alongside output classification, not the full text of classified inputs.

Second, logs must integrate into SIEM workflows with detection rules for anomalous AI behavior. Useful anomaly signals include:

  • Sudden increase in refusals (indicating adversarial probe attempts)
  • Output length outliers (unusually long outputs may indicate data extraction)
  • Cross-classification data references in outputs from unclassified AI tools
  • API call volume spikes outside normal operational windows

Adversarial Attack Prevention

Section 1513 treats adversarial attack prevention as an ongoing operational control, not a one-time assessment. The NIST AI RMF MANAGE function covers post-deployment monitoring, appeal and override mechanisms, and decommissioning procedures. For defense contractors, this translates to:

  • Quarterly adversarial testing of production AI systems using red team methodologies (see AI red teaming guide)
  • Documented procedures for AI system quarantine when compromise is suspected
  • Model version control with the ability to roll back to a known-good checkpoint
  • Human-in-the-loop gates for AI outputs affecting mission-critical decisions

NIST AI RMF Alignment

Section 1513 explicitly references NIST SP 800-series requirements as the technical baseline. NIST AI 100-1, the AI Risk Management Framework, provides the governance layer.

The four functions map to Section 1513 requirements:

  • GOVERN (1.1): Understand and document legal and regulatory requirements for AI in your operating environment. For defense contractors, this means NDAA Section 1513/1532, DFARS clauses as they evolve, and CMMC Level 2 AI controls.
  • MAP (5.1): Document the likelihood and magnitude of impacts from your AI systems. For CUI-processing AI, this includes data exfiltration via prompt injection, operational disruption via data poisoning, and compliance exposure via prohibited AI (Section 1532).
  • MEASURE (2.11): Evaluate AI systems for reliability under adversarial conditions. Red team exercises documented against MITRE ATLAS tactics (see MITRE ATLAS framework guide) satisfy this function.
  • MANAGE (4.1): Maintain post-deployment monitoring and change management. This is the continuous monitoring requirement that maps to CMMC CA (Security Assessment) and IR (Incident Response) control families.
The April 2026 NIST concept note for an AI RMF Profile on Trustworthy AI in Critical Infrastructure directly targets defense contractors operating in critical infrastructure sectors, providing a sector-specific implementation path that will be referenced in procurement when finalized.

The False Claims Act Risk

Freshfields legal analysts flagged a compliance dimension that security teams often miss: misrepresentation of AI security compliance under NDAA requirements creates False Claims Act (FCA) exposure. If a defense contractor certifies DFARS compliance while knowingly using prohibited AI (Section 1532 violations), or misrepresents the security posture of covered AI systems in contract certifications, that misrepresentation triggers FCA liability. FCA penalties include treble damages and per-claim fines, and qui tam provisions allow private parties to bring claims on the government's behalf.

This makes AI security documentation a legal instrument, not just an operational one. Evidence of NDAA compliance controls needs to be accurate, current, and auditable. Aspirational compliance documentation that describes planned controls as implemented controls is FCA exposure.

90-Day Compliance Roadmap for Defense Contractors

Given the CMMC Phase 2 deadline of November 2026 and the June 2026 Congressional status update requiring DoD to publish its AI security framework implementation timeline, the window for preparation is short.

Days 1-30: Inventory and risk assessment

  • Complete an inventory of all AI systems in scope: which systems process, store, or transmit CUI; which use AI in decision-making; which integrate AI services from third parties
  • Apply the Section 1532 prohibited AI check to all AI vendors and cloud services in use (Chinese-origin or foreign-controlled AI is an immediate compliance risk)
  • Map each AI system to the four Section 1513 control categories to identify gaps
  • Identify C3PAO for CMMC Level 2 assessment and secure a slot given 18-month wait times
Days 31-60: Priority control implementation
  • Deploy input validation and prompt injection controls on AI systems processing CUI or connecting to external data sources
  • Implement MFA and RBAC on all AI model endpoints and training infrastructure
  • Integrate AI system logs into SIEM and configure baseline anomaly detection rules
  • Document AI system access controls and output monitoring procedures in your System Security Plan (SSP)
Days 61-90: Red team validation and documentation
  • Conduct adversarial testing of AI systems against NDAA-relevant threat scenarios (prompt injection, data extraction, model evasion)
  • Document test results and remediation actions in AI-specific security assessment records
  • Align AI security documentation with NIST AI RMF GOVERN, MAP, MEASURE, and MANAGE functions
  • Brief legal and contracts teams on False Claims Act implications and ensure contract certifications are accurate
A formal AI security assessment from a qualified third party provides defensible documentation of your compliance posture and identifies gaps before a DCSA or C3PAO assessment surfaces them.

Conclusion

NDAA FY2026 Section 1513 represents the federal government's first comprehensive mandate for AI security controls in the defense industrial base. The framework is still being developed, with DoD's implementation plan due to Congress by June 16, 2026, but the direction is clear: AI systems that process DoD data will face the same security scrutiny as traditional IT systems, with input validation, access controls, output monitoring, and adversarial testing as mandatory controls.

Defense contractors that start now have a genuine first-mover advantage. CMMC certification backlogs are severe, and the AI framework adds new control requirements on top of existing CMMC obligations. Contractors that treat AI security as a parallel track to CMMC preparation, rather than a separate problem, will be positioned for compliance when the framework is finalized.

BeyondScale conducts AI security assessments aligned to NIST AI RMF, NDAA Section 1513 requirements, and CMMC control families. Book an AI security assessment to establish your compliance baseline before the Congressional deadline, or run a Securetom scan to identify exposed AI endpoints in your current environment.

AI Security Audit Checklist

A 30-point checklist covering LLM vulnerabilities, model supply chain risks, data pipeline security, and compliance gaps. Used by our team during actual client engagements.

We will send it to your inbox. No spam.

Share this article:
Compliance
BT

BeyondScale Team

AI Security Team, BeyondScale Technologies

Security researcher and engineer at BeyondScale Technologies, an ISO 27001 certified AI cybersecurity firm.

Want to know your AI security posture? Run a free Securetom scan in 60 seconds.

Start Free Scan

Ready to Secure Your AI Systems?

Get a comprehensive security assessment of your AI infrastructure.

Book a Meeting