Do we need AI governance if we're only running a single AI pilot?
If that pilot processes, stores, or transmits CUI, it falls within the CMMC assessment boundary under current requirements — the 110 NIST SP 800-171 Rev 2 controls apply regardless of whether the system is labeled 'pilot' or 'production.' The forthcoming Section 1513 framework will add AI-specific requirements on top of that existing baseline. Governance obligations are triggered by what the system touches, not by what the organization calls it.
What AI governance documentation should we start building now before the Section 1513 framework is finalized?
The highest-leverage starting point is architecting the system to tag every AI interaction with six metadata elements: source data lineage, model version, timestamp, user identity, confidence score, and human approval authority. This metadata structure satisfies current CMMC Level 2 audit requirements for systems processing CUI and anticipates the traceability standards Section 1513 is expected to formalize, following the direction set in the legislative text and the NIST AI Risk Management Framework.
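As a concrete illustration, here is a minimal sketch of that six-element record, assuming a Python-based inference service. The AuditRecord name, field types, and example values are illustrative assumptions, not drawn from any CMMC or Section 1513 artifact.

```python
# Hypothetical sketch of the six-element audit metadata record described above.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class AuditRecord:
    source_data_lineage: str       # datasets/documents behind the output
    model_version: str             # exact model build or checkpoint used
    timestamp: str                 # ISO 8601, UTC
    user_identity: str             # authenticated principal who made the request
    confidence_score: float        # model- or pipeline-reported confidence, 0.0-1.0
    human_approval_authority: str  # role or individual accountable for release

def record_interaction(lineage: str, model: str, user: str,
                       confidence: float, approver: str) -> AuditRecord:
    """Capture one AI interaction's audit metadata at inference time."""
    return AuditRecord(
        source_data_lineage=lineage,
        model_version=model,
        timestamp=datetime.now(timezone.utc).isoformat(),
        user_identity=user,
        confidence_score=confidence,
        human_approval_authority=approver,
    )

# Example: serialize a record for an append-only audit log.
rec = record_interaction("contracts_db:rev_2024_11", "summarizer-v3.2",
                         "jdoe@example.mil", 0.91, "contracts-review-lead")
print(json.dumps(asdict(rec)))
```

Capturing all six elements in a single immutable record at the moment of inference, rather than reconstructing them later from scattered logs, is what makes the trail usable in an assessment.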
Will the NDAA Section 1513 framework apply to all defense subcontractors, or only primes?
Section 1513 directs the DoW to incorporate AI security requirements into CMMC and DFARS, both of which use flow-down contract clauses that extend compliance obligations from primes to subcontractors. Applicability will follow those established flow-down mechanisms: any subcontractor that handles CUI and deploys AI systems should expect to fall within the framework's requirements.
How is AI governance different from regular CMMC compliance?
CMMC Level 2's 110 NIST SP 800-171 controls address information system security broadly — access management, incident response, configuration management, and related domains. AI governance adds requirements specific to AI system characteristics: model versioning and change tracking, inference-level traceability, data lineage through training and retrieval pipelines, human-in-the-lead validation for safety-critical or compliance-sensitive outputs, and continuous monitoring for model drift. Section 1513 will formalize these AI-specific governance layers on top of the existing CMMC baseline rather than replacing it.
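To make one of those layers concrete, the sketch below shows a human-in-the-lead gate wrapped around an existing inference call: outputs below a confidence threshold are withheld until a human approves them. The function names, the threshold value, and the gating logic are illustrative assumptions, not requirements taken from Section 1513 or CMMC.

```python
# Hypothetical human-in-the-lead gate over an existing inference call.
from typing import Callable, Optional, Tuple

APPROVAL_THRESHOLD = 0.85  # assumed policy value, set by the program office

def gated_inference(run_model: Callable[[str], Tuple[str, float]],
                    prompt: str,
                    request_human_review: Callable[[str, str], bool]) -> Optional[str]:
    """Release an output only if confidence clears the threshold or a human approves."""
    output, confidence = run_model(prompt)
    if confidence >= APPROVAL_THRESHOLD:
        return output          # auto-release path; still logged upstream
    if request_human_review(prompt, output):
        return output          # human approval authority signed off
    return None                # withheld pending rework

# Example: a stub model at 0.91 confidence clears the gate without review.
result = gated_inference(lambda p: ("draft response", 0.91),
                         "summarize clause 7",
                         lambda p, o: False)
print(result)
```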
Can we retrofit AI governance into systems that are already deployed?
Retrofitting is technically possible but substantially more expensive and disruptive than building governance from inception. Audit trail metadata capture must be embedded at the inference layer — the point where the AI system generates its outputs — to be reliable and tamper-resistant. Adding this capability to a production system requires modifying inference pipelines, revalidating data integrations, retraining operations staff on new workflows, and potentially re-architecting access controls. Building governance into the initial architecture typically represents a fraction of total deployment cost, while retrofitting can approach or exceed the original deployment investment depending on system complexity.
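As an illustration of what tamper-resistant capture at the inference layer can look like, the sketch below chains each audit entry to the previous one with a SHA-256 hash, so any after-the-fact edit breaks the chain on verification. The class and method names are hypothetical; hash chaining is one common tamper-evidence technique, not a mechanism prescribed by CMMC.

```python
# Hypothetical hash-chained audit log: each entry's hash covers the
# previous entry's hash, making silent modification detectable.
import hashlib
import json

class ChainedAuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, record: dict) -> None:
        body = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((self._last_hash + body).encode()).hexdigest()
        self._entries.append({"record": record, "hash": digest,
                              "prev_hash": self._last_hash})
        self._last_hash = digest

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self._entries:
            body = json.dumps(e["record"], sort_keys=True)
            if (e["prev_hash"] != prev or
                    hashlib.sha256((prev + body).encode()).hexdigest() != e["hash"]):
                return False
            prev = e["hash"]
        return True

# Example: append an entry at inference time, then verify chain integrity.
log = ChainedAuditLog()
log.append({"model_version": "summarizer-v3.2", "user": "jdoe@example.mil"})
assert log.verify()
```

Wiring this kind of capture into an already-deployed inference pipeline is exactly the retrofit work described above; placing it in the initial architecture makes it a small wrapper instead of a re-engineering project.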