Standards and Frameworks for Intelligent Systems

Governing the development and deployment of intelligent systems requires more than technical benchmarks; it demands structured frameworks that define risk categories, prescribe evaluation methods, and assign accountability. This page covers the primary standards and frameworks applied to intelligent systems in the United States: the bodies that produce them, the mechanisms through which they operate, the scenarios in which they apply, and the classification boundaries that determine which framework governs a given system. These frameworks underpin responsible system design in every major regulated sector.


Definition and scope

Standards and frameworks for intelligent systems are formal documents, guidelines, and structured methodologies produced by recognized bodies to specify how AI-enabled systems should be designed, evaluated, deployed, and monitored. They differ from regulations in that most are voluntary unless incorporated by reference into law or contract, though adoption increasingly functions as a de facto compliance baseline in federally regulated sectors.

The scope of applicable frameworks depends on four intersecting variables: the system's function (classification, prediction, generation, control), the sector of deployment (healthcare, finance, transportation, national security), the potential for harm to individuals or groups, and whether the system operates autonomously or under human supervision. The National Institute of Standards and Technology (NIST), the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC) are the four primary bodies producing reference-grade standards for intelligent systems at a global or national scale.
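
To make these four variables concrete, the sketch below models them as a simple scoping record. The enum values and field names are illustrative assumptions for this page, not vocabulary defined by any of the bodies above.

    from dataclasses import dataclass
    from enum import Enum

    # Illustrative taxonomy only; these names are assumptions for the
    # sketch, not terms defined by NIST, IEEE, ISO, or IEC.
    class Function(Enum):
        CLASSIFICATION = "classification"
        PREDICTION = "prediction"
        GENERATION = "generation"
        CONTROL = "control"

    @dataclass
    class SystemScope:
        """The four intersecting variables that determine framework scope."""
        function: Function
        sector: str          # e.g. "healthcare", "finance", "transportation"
        harm_potential: str  # e.g. "low", "moderate", "high"
        autonomous: bool     # True if operating without human supervision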

The regulatory landscape for intelligent systems in the US adds a compliance layer above voluntary frameworks: agency guidance from the Federal Trade Commission, the Food and Drug Administration, and the Department of Transportation, among others, frequently references or mandates alignment with named standards bodies as a condition of market authorization or enforcement safe harbor.


How it works

Most standards and frameworks for intelligent systems share a common architecture organized around a lifecycle model. The NIST AI Risk Management Framework (AI RMF 1.0, 2023) is the most widely cited national framework in the United States. It organizes AI risk management across four core functions, illustrated in the code sketch after the list:

  1. GOVERN — Establish organizational policies, roles, and accountability structures for AI risk.
  2. MAP — Identify and categorize the context, intended use, and potential risks of a specific AI system.
  3. MEASURE — Analyze and assess the risks identified in the MAP function using qualitative and quantitative methods.
  4. MANAGE — Prioritize and implement risk treatments, including mitigations, monitoring protocols, and fallback procedures.
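
As a rough illustration of how these functions structure a risk workflow, the sketch below threads a single risk-register entry through the lifecycle. The function names mirror the AI RMF; the record fields and helper are assumptions made for illustration, not part of the framework itself.

    from dataclasses import dataclass, field
    from enum import Enum

    class RMFFunction(Enum):  # the four AI RMF core functions
        GOVERN = "govern"
        MAP = "map"
        MEASURE = "measure"
        MANAGE = "manage"

    @dataclass
    class RiskItem:
        """Hypothetical risk-register entry; field names are illustrative."""
        description: str
        context: str = ""   # identified during MAP
        severity: int = 0   # scored during MEASURE, e.g. on a 1-5 scale
        treatment: str = "" # selected during MANAGE
        completed: list[RMFFunction] = field(default_factory=list)

    def measure_risk(item: RiskItem, severity: int) -> None:
        """Assess a mapped risk, per the MEASURE function; MAP and
        MANAGE helpers would follow the same pattern."""
        item.severity = severity
        item.completed.append(RMFFunction.MEASURE)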

Parallel to the AI RMF, ISO/IEC 42001:2023 establishes an AI Management System standard — the first certifiable management system standard specifically for AI. It aligns closely with the ISO 9001 quality management structure and requires documented risk assessment, defined roles, and continual improvement cycles. Organizations seeking certification must demonstrate traceable processes across the full system development lifecycle.
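
Certification-grade traceability of the kind ISO/IEC 42001 expects can be pictured as evidence records tied to lifecycle stages. The stage names and record fields below are assumptions made for this sketch; the standard does not prescribe them.

    from dataclasses import dataclass

    # Illustrative lifecycle stages; ISO/IEC 42001 does not mandate
    # these exact names.
    STAGES = ["design", "development", "validation", "deployment", "monitoring"]

    @dataclass
    class TraceRecord:
        """Hypothetical evidence link for certification traceability."""
        stage: str         # one of STAGES
        requirement: str   # e.g. "documented risk assessment"
        evidence_uri: str  # pointer to the controlled document
        owner: str         # the defined role accountable for this record

    def coverage_gaps(records: list[TraceRecord]) -> list[str]:
        """Return stages with no evidence yet; an auditor would expect none."""
        covered = {r.stage for r in records}
        return [s for s in STAGES if s not in covered]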

The IEEE has produced a complementary set of standards under its Ethically Aligned Design initiative. IEEE Std 7000-2021 addresses ethical concerns in system design, while IEEE Std 7010-2020 specifies a wellbeing impact assessment methodology for autonomous systems. These standards focus on value alignment and stakeholder impact rather than technical performance alone.

For safety-critical applications, the IEC 61508 series on functional safety defines Safety Integrity Levels (SIL 1 through SIL 4) that determine the rigor of verification and validation required for software embedded in safety-critical control systems. Autonomous systems operating in transportation, industrial control, or medical device contexts routinely reference IEC 61508 or its sector-specific derivatives such as ISO 26262 (automotive) and IEC 62304 (medical device software).
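
Because the assigned SIL determines verification rigor, tooling often keys checks to the level. The mapping below is a loose paraphrase for illustration only; the actual IEC 61508 requirements are tabulated in far more detail and depend on the full hazard analysis.

    # Loosely paraphrased, illustrative mapping from SIL to verification
    # rigor; the real IEC 61508 tables are far more granular.
    SIL_VERIFICATION = {
        1: "basic testing and code review",
        2: "structured testing with documented coverage targets",
        3: "independent verification and semi-formal methods",
        4: "independent assessment, with formal methods where feasible",
    }

    def required_rigor(sil: int) -> str:
        if sil not in SIL_VERIFICATION:
            raise ValueError("IEC 61508 defines SIL 1 through SIL 4")
        return SIL_VERIFICATION[sil]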

A key structural distinction separates risk-based frameworks from capability-based standards. Risk-based frameworks, such as the NIST AI RMF and ISO/IEC 42001, scale their obligations to the assessed potential for harm in the system's context of use. Capability-based standards, such as the IEC 61508 SIL scheme, attach fixed verification requirements to what the system is able to do or control, independent of any broader organizational risk program.


Common scenarios

Healthcare AI: The FDA's Software as a Medical Device (SaMD) framework aligns with International Medical Device Regulators Forum (IMDRF) guidance and references ISO 14971 for risk management. Intelligent systems in healthcare that perform diagnostic functions must satisfy both the FDA's predetermined change control plan requirements and IEC 62304's software lifecycle standards.

Financial services AI: The Office of the Comptroller of the Currency and the Federal Reserve have each issued guidance referencing the model risk management principles codified in SR 11-7, a Federal Reserve supervisory guidance document that functions as the de facto standard for intelligent systems used in credit underwriting, fraud detection, and market surveillance.
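
SR 11-7 centers on maintaining a model inventory, subjecting each model to validation independent of its developers, and monitoring it in production. The sketch below shows what a minimal inventory entry might look like; the field names and the one-year revalidation default are assumptions, not requirements stated in the guidance.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ModelRecord:
        """Hypothetical model-inventory entry in the spirit of SR 11-7."""
        name: str
        use_case: str               # e.g. "credit underwriting"
        owner: str
        independent_validator: str  # must not be the development team
        last_validated: date
        monitoring_plan: str

    def validation_overdue(record: ModelRecord, max_age_days: int = 365) -> bool:
        """Flag models whose validation has lapsed (assumed cadence)."""
        return (date.today() - record.last_validated).days > max_age_days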

Autonomous vehicles: The National Highway Traffic Safety Administration references ISO 26262 (functional safety for road vehicles) and NIST's guidelines for autonomous systems in its voluntary guidance framework for automated driving systems.

Cybersecurity AI: NIST SP 800-53 Rev 5, the primary security control catalog for federal information systems, includes controls directly applicable to AI system integrity, auditability, and access restriction — making it a baseline standard for intelligent systems in cybersecurity operating within or alongside federal infrastructure.
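
The pairing below lists three real SP 800-53 Rev 5 control families next to the AI concerns named above; the pairing itself is an illustrative reading, not a mapping published by NIST.

    # Real NIST SP 800-53 Rev 5 control families; the pairing with AI
    # concerns is an illustrative reading, not an official NIST mapping.
    CONTROL_FAMILY_TO_AI_CONCERN = {
        "SI (System and Information Integrity)": "AI system integrity",
        "AU (Audit and Accountability)": "auditability of model behavior",
        "AC (Access Control)": "access restriction to models and data",
    }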


Decision boundaries

Selecting the correct framework requires resolving four classification questions before implementation begins:

1. Is the system subject to mandatory or voluntary compliance?
Voluntary frameworks (NIST AI RMF, ISO 42001) become effectively mandatory when incorporated into federal contracts, agency enforcement guidance, or sector-specific regulations. For example, the FTC has indicated in enforcement actions that failure to follow documented risk management practices — consistent with NIST AI RMF structure — may constitute an unfair practice under Section 5 of the FTC Act.

2. Does the system operate in a high-risk domain?
The EU AI Act, while a European instrument, has extraterritorial relevance for US developers serving European markets. It defines eight high-risk application categories, including biometric identification, critical infrastructure management, and educational assessment. Systems falling within these categories face conformity assessment obligations rather than self-declaration.

3. Does the system include components governed by functional safety standards?
If an intelligent system controls physical actuators, issues commands to machinery, or makes real-time decisions in safety-critical environments, IEC 61508 or a domain-specific derivative applies regardless of whether the system is also subject to a broader AI risk framework. The two layers operate in parallel, not as substitutes.

4. Does the system process personal data as part of its inference pipeline?
Privacy and data governance requirements introduce a third layer of frameworks for intelligent systems: the NIST Privacy Framework 1.0, ISO/IEC 27701:2019, and sector-specific requirements under HIPAA or the Gramm-Leach-Bliley Act. These frameworks interact with, but do not replace, AI-specific standards.

The intersection of these four questions determines which combination of frameworks applies. Ethics and bias considerations, explainability requirements, and accountability frameworks complement the standards described here.
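
As a closing illustration, the routine below composes the four questions into a coarse framework shortlist. The selection rules are assumptions that encode only the rough logic described above; real applicability turns on sector regulators and legal counsel, not a lookup.

    def applicable_frameworks(high_risk_eu: bool,
                              controls_physical_systems: bool,
                              processes_personal_data: bool,
                              federally_regulated: bool) -> list[str]:
        """Coarse, illustrative framework selector; not an authoritative test."""
        frameworks = ["NIST AI RMF 1.0"]  # common US baseline
        if federally_regulated:
            frameworks.append("ISO/IEC 42001:2023 (certifiable AI management system)")
        if high_risk_eu:
            frameworks.append("EU AI Act conformity assessment")
        if controls_physical_systems:
            frameworks.append("IEC 61508 or a sector derivative (ISO 26262, IEC 62304)")
        if processes_personal_data:
            frameworks.append("NIST Privacy Framework 1.0 / ISO/IEC 27701:2019")
        return frameworks

    # Example: an EU-facing credit model that processes personal data
    # print(applicable_frameworks(True, False, True, True))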


