Intelligent Systems Glossary of Terms
This page compiles and defines the core terminology used across the field of intelligent systems, covering foundational concepts, architectural components, and operational classifications. Precise vocabulary matters in this domain because ambiguous terminology produces misaligned design decisions, regulatory misclassification, and accountability gaps. The definitions collected here draw on standards published by the National Institute of Standards and Technology (NIST), the IEEE, and other recognized public bodies. For a broader orientation to the field, the home resource index provides structured entry points across all major subject areas.
Definition and scope
Intelligent systems terminology spans three overlapping registers: technical (describing computational mechanisms), regulatory (defining liability and compliance categories), and operational (distinguishing deployment contexts). A glossary in this domain must respect all three because the same term can carry different meanings depending on which framework applies.
The definitions below are organized under four structural headings that correspond to the page's content sections. Each entry identifies the source framework where relevant. NIST AI 100-1, the Artificial Intelligence Risk Management Framework (AI RMF 1.0), serves as the primary definitional anchor for U.S.-facing usage.
Artificial Intelligence (AI): NIST AI 100-1 defines an AI system as "an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments" (NIST AI 100-1). This definition excludes systems whose behavior is entirely hard-coded and requires some form of learned or inferred output.
Intelligent System: A broader category than AI alone. IEEE Standard 2861-2023 frames intelligent systems as systems that perceive their environment, process information, and take actions to achieve goals, encompassing both narrow AI applications and hybrid human-machine architectures. See the types of intelligent systems page for classification by architecture.
Machine Learning (ML): A subset of AI in which a system improves performance on a task through exposure to data without being explicitly programmed for each case. ML is the enabling mechanism behind most modern intelligent systems deployments; see machine learning in intelligent systems.
Training Data: The labeled or unlabeled dataset used to fit model parameters. Data quality directly governs output reliability; see data requirements for intelligent systems for scope and governance considerations.
Model: A mathematical function, often a neural network or decision tree, that maps inputs to outputs after training. Model behavior is the primary object of evaluation in training and validation of intelligent systems.
Inference: The operational phase in which a trained model processes new inputs to produce predictions or decisions — distinct from the training phase. Inference latency is a primary performance constraint in real-time deployments.
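The training/inference distinction above can be sketched in a few lines. This is an illustrative ordinary-least-squares example, not drawn from any cited standard; all function names are hypothetical:

```python
def train(xs, ys):
    """Training phase: fit slope and intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def infer(params, x_new):
    """Inference phase: apply frozen parameters to an unseen input."""
    slope, intercept = params
    return slope * x_new + intercept

params = train([1, 2, 3, 4], [2, 4, 6, 8])   # training phase (parameters fit once)
prediction = infer(params, 5)                 # inference phase (repeated at deployment)
```

The split matters operationally because `train` is expensive and runs offline, while `infer` must meet the latency constraints of the deployment.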
Explainability: The degree to which a system's outputs can be traced to identifiable inputs and mechanisms in terms understandable to a human stakeholder. NIST AI 100-1 treats explainability as a component of trustworthy AI. See explainability and transparency in intelligent systems.
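For linear models, explainability in this input-tracing sense is direct: each feature's contribution to the output is its value times its learned weight, and the contributions plus bias sum exactly to the prediction. A minimal sketch with hypothetical feature names and weights:

```python
def linear_attributions(weights, inputs, bias=0.0):
    """Per-feature contributions for a linear model: weight_i * input_i.
    Contributions plus bias sum to the prediction, which is the sense in
    which linear models are directly explainable."""
    contributions = {name: weights[name] * inputs[name] for name in weights}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

pred, contrib = linear_attributions(
    weights={"income": 0.5, "debt": -0.8},   # hypothetical learned weights
    inputs={"income": 4.0, "debt": 2.0},     # hypothetical applicant features
    bias=1.0,
)
# contrib traces the output back to each identifiable input
```

Nonlinear models lack this exact decomposition, which is why post-hoc attribution methods exist for them.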
How it works
Glossary terms in intelligent systems are not static; they inherit meaning from the standards lifecycle and from regulatory interpretation. The following terms describe the functional architecture through which intelligent systems operate.
- Perception layer: Sensors, cameras, microphones, or data feeds that convert real-world signals into digital representations. Relevant to computer vision and intelligent systems and natural language processing in intelligent systems.
- Knowledge representation: The encoding of domain facts and relationships in a structure a system can query and reason over. Formal ontologies, semantic networks, and knowledge graphs are the primary formats. See knowledge representation and reasoning.
- Inference engine: The computational component that applies logical rules or probabilistic models to a knowledge base to derive conclusions. Central to expert systems and rule-based AI.
- Learning algorithm: The procedure by which model parameters are updated — gradient descent for neural networks, Bayesian updating for probabilistic models, evolutionary search for optimization problems.
- Decision module: The component that converts model outputs into actions or recommendations. In autonomous systems and decision-making, this module interacts directly with actuators or external APIs.
- Feedback loop: The mechanism by which system outputs influence future inputs, either through retraining pipelines or real-time adaptive control.
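The inference-engine component listed above can be sketched as a minimal forward-chaining rule evaluator; the rules and facts here are hypothetical examples, and production expert systems use far more sophisticated matching (e.g., the Rete algorithm):

```python
# Each rule is (set of premises, conclusion). Hypothetical safety rules.
rules = [
    ({"temperature_high", "pressure_rising"}, "open_relief_valve"),
    ({"open_relief_valve"}, "log_safety_event"),
]

def forward_chain(facts, rules):
    """Repeatedly apply any rule whose premises are all present,
    adding its conclusion, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"temperature_high", "pressure_rising"}, rules)
# derived now also contains "open_relief_valve" and "log_safety_event"
```

Note the chaining: the second rule fires only because the first rule's conclusion became a fact, which is the defining behavior of an inference engine over a knowledge base.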
Narrow AI vs. General AI: Narrow AI — also called weak AI — operates within a specific task domain (image classification, speech recognition, route optimization). General AI, sometimes called artificial general intelligence (AGI), denotes a system capable of transferring learned competencies across arbitrary domains. No deployed system as of this writing meets the technical threshold for AGI. The research frontiers in intelligent systems page covers active work toward more generalized architectures.
Common scenarios
The following terms appear most frequently in sector-specific deployments. Understanding their precise definitions prevents misapplication across domains.
Autonomous System: A system that executes sequences of decisions without continuous human direction. The three-level autonomy taxonomy (human-in-the-loop, human-on-the-loop, fully autonomous) commonly traced to U.S. Department of Defense Directive 3000.09 has been adopted widely outside defense contexts.
Edge AI: Inference executed on local hardware (microcontrollers, embedded GPUs) rather than remote servers. Edge deployment reduces latency and limits data egress but constrains model size. Relevant to intelligent systems in manufacturing and intelligent systems in transportation.
Federated Learning: A distributed training approach in which model updates are computed locally on client devices and aggregated centrally without raw data leaving the device. Relevant to privacy and data governance for intelligent systems.
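A minimal sketch of the federated averaging pattern (a FedAvg-style weighted aggregation over a toy scalar model; the client data, learning rate, and function names are illustrative):

```python
def local_update(weights, data, lr=0.1):
    """One local gradient step on mean-squared error for a scalar model
    y = w * x (a stand-in for on-device training)."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def federated_round(global_w, client_datasets):
    """Aggregate client updates weighted by dataset size.
    Only the updated weights travel to the server; raw data stays local."""
    total = sum(len(d) for d in client_datasets)
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    return sum(w * n for w, n in updates) / total

# Two clients whose private data both follow y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
# w converges toward 2.0, the slope underlying every client's data
```

The privacy property is structural: `federated_round` never reads `clients`' raw examples, only the weight each client returns.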
Bias (Algorithmic): Systematic and repeatable error in model outputs attributable to flawed assumptions in the training process, non-representative training data, or feedback loops that amplify historical inequities. NIST AI 100-1 identifies bias as a primary AI risk category. See ethics and bias in intelligent systems.
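One common quantitative check for this kind of bias is comparing favorable-outcome rates across groups (the demographic parity gap). A minimal sketch with hypothetical group data; real audits use many complementary metrics:

```python
def selection_rates(decisions):
    """Favorable-outcome rate per group. `decisions` maps each group
    to a list of binary model outputs (1 = favorable)."""
    return {g: sum(outs) / len(outs) for g, outs in decisions.items()}

def demographic_parity_gap(decisions):
    """Largest difference in selection rates across groups;
    0.0 means all groups are selected at equal rates on this data."""
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)

gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],   # 75% favorable
    "group_b": [1, 0, 0, 0],   # 25% favorable
})
# gap = 0.5, a disparity large enough to warrant investigation
```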
Hallucination: A failure mode of generative models in which the system produces outputs that are factually incorrect but statistically plausible given the training distribution. Hallucination rates vary by model architecture and task domain. See intelligent systems failure modes and mitigation.
Digital Twin: A real-time virtual model of a physical asset or process, updated continuously by sensor feeds. Used in intelligent systems in energy and utilities and advanced manufacturing contexts.
Decision boundaries
Classification boundaries determine which regulatory frameworks, safety standards, and accountability structures apply. Three axes govern most classification decisions in intelligent systems.
Risk-based classification: The EU AI Act — a legislative framework that has influenced U.S. regulatory discussions — organizes AI systems into four risk tiers: unacceptable risk (prohibited), high risk (regulated), limited risk (transparency obligations), and minimal risk (unregulated). The regulatory landscape for intelligent systems in the U.S. page maps how U.S. sector agencies apply analogous distinctions.
Human oversight level: Systems are classified by the degree of human control maintained during operation. The three-level DoD taxonomy (human-in-the-loop, human-on-the-loop, fully autonomous) provides a stable cross-sector reference. Accountability frameworks for intelligent systems maps how oversight level affects liability assignment.
Domain-specific safety standards: Systems deployed in safety-critical verticals carry additional classification weight:
- Medical devices: FDA's Software as a Medical Device (SaMD) framework applies to AI/ML-based diagnostic and therapeutic software, distinguishing by intended use and risk class; device quality system requirements under 21 CFR Part 820 also apply.
- Automotive: SAE International's six-level driving automation taxonomy (SAE J3016) classifies vehicles from Level 0 (no automation) to Level 5 (full automation), determining which safety requirements apply.
- General AI risk: NIST AI RMF 1.0 organizes risk management across four functions (Govern, Map, Measure, Manage), providing a voluntary framework that federal agencies and contractors increasingly adopt.
Supervised vs. Unsupervised vs. Reinforcement Learning: These three learning paradigms define how a model acquires knowledge and, correspondingly, what validation approaches apply. Supervised learning trains on labeled input-output pairs; unsupervised learning identifies structure in unlabeled data; reinforcement learning trains an agent through reward signals tied to environment interaction. Each paradigm calls for distinct performance metrics and validation protocols; see intelligent systems performance metrics.
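The three paradigms can be contrasted with one minimal update step each. The data, cluster seeds, and reward values below are illustrative, and each function is a deliberately simplified stand-in for its paradigm:

```python
# Supervised: learn from labeled pairs -- here, a class centroid per label.
def supervised_fit(labeled):
    centroids = {}
    for x, label in labeled:
        centroids.setdefault(label, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in centroids.items()}

# Unsupervised: find structure in unlabeled data -- here, one k-means
# step assigning 1-D points to the nearer of two means, then re-averaging.
def kmeans_step(points, means):
    clusters = {m: [] for m in means}
    for p in points:
        nearest = min(means, key=lambda m: abs(p - m))
        clusters[nearest].append(p)
    return [sum(ps) / len(ps) if ps else m for m, ps in clusters.items()]

# Reinforcement: update an action-value estimate from a reward signal
# (the tabular Q-learning update rule).
def q_update(q, state, action, reward, next_q_max, alpha=0.5, gamma=0.9):
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * next_q_max - old)
    return q
```

Note what each function consumes: labels, raw points, or a reward, which is exactly the distinction the paradigm names capture.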
For a structured comparison of intelligent systems against traditional rule-based software — including how these glossary terms shift in meaning across those contexts — see intelligent systems vs. traditional software. The full intelligent systems standards and frameworks page provides the complete standards landscape from which these definitions derive.