AI Glossary

Understanding the Language & Jargon of Artificial Intelligence

A comprehensive collection of 285 terms relating to artificial intelligence and automation, including:

  • Core AI and machine learning concepts
  • Data science and statistics
  • Automation technologies and practices
  • Hardware and infrastructure
  • Ethics and philosophical considerations
  • Some pop culture and historical context
  • Industry terminology and best practices
  • Environmental and societal impact

Terms are cross-referenced where relevant, allowing you to explore related concepts and build a deeper understanding of the field. If you have any suggestions for additions or corrections, please let me know.

Section A, containing 23 terms

Accelerator
Specialized computer hardware, like GPUs or TPUs, designed to significantly speed up the heavy calculations needed for AI.
Agent
An AI program that can operate on its own to achieve specific goals. It observes its surroundings, makes its own decisions, and takes action without needing constant human direction.
Agentic AI
AI systems that are designed to be independent and proactive. They can make decisions and adapt their behavior in response to changing situations, much like a human agent.
AGI Alignment
The crucial process of making sure that artificial general intelligence systems will act in ways that are helpful and safe for humans, aligning with our values.
AI Ethics Officer
A role dedicated to establishing and upholding ethical principles and guidelines for the development and deployment of AI within an organization. The AI Ethics Officer is responsible for promoting responsible AI practices, addressing ethical risks and concerns, and ensuring AI systems are aligned with human values and societal well-being.
AI Governance
The framework of rules, policies, and processes designed to ensure the responsible and ethical development and deployment of AI technologies.
AI Literacy
A foundational understanding of AI concepts, capabilities, and limitations, enabling individuals and organizations to effectively engage with AI.
AI-Driven Automation
Automation that is enhanced by artificial intelligence. This means the automated systems can learn, adapt, and make smarter decisions as they operate, improving over time.
Algorithm
A precise set of instructions that a computer follows to solve a problem or complete a task. Think of it like a recipe, but for computers.
Applied AI
The practical application of artificial intelligence technologies and techniques to solve specific, real-world problems across various industries and domains. Applied AI focuses on creating tangible AI-powered solutions, products, and systems that address concrete needs and deliver practical value, as opposed to purely theoretical or research-focused AI.
Artificial General Intelligence (AGI)
A theoretical type of AI that would have the same broad thinking and learning abilities as a human being. It could perform any intellectual task that a human can. Artificial superintelligence (ASI) is the related term for AI that would surpass even this level.
Artificial Intelligence (AI)
The field of computer science focused on creating machines that can perform tasks that typically require human intelligence. This includes learning, problem-solving, and decision-making.
Artificial Process Automation (APA)
Using AI tools and techniques to make traditional process automation systems more intelligent and adaptable. This allows for handling more complex and variable tasks.
Artificial Superintelligence (ASI)
A hypothetical form of AI that surpasses human intelligence in all aspects, including creativity, problem-solving, and general wisdom. ASI is often considered the ultimate frontier in AI development, but also raises significant safety concerns.
ASIC (Application-Specific Integrated Circuit)
Custom-designed computer chips that are built specifically to be very efficient at certain AI tasks. They offer better performance and use less energy for those specific tasks.
Attention Mechanism
A technique used in neural networks that allows the AI to focus on the most important parts of the information it is processing, especially useful in understanding language and images.
Autoencoder
A type of neural network that learns to compress data into a smaller, more efficient form, and then can also reconstruct the original data from this compressed version.
Automated Decision-Making (ADM)
Systems that use predefined rules or AI models to make choices or judgments without needing a person to step in for each decision.
Automated Self-Improvement Methods
Techniques that allow AI systems to learn from their own performance and automatically get better over time, often by using feedback loops and machine learning.
Automated Workflow
A series of tasks that are carried out automatically by software. This helps to streamline processes by reducing manual steps.
Automation
Using technology to perform tasks automatically, with as little human involvement as possible. This is done to increase speed, efficiency, and accuracy in processes.
Autonomous Agent
An AI agent that can operate and make decisions independently. It can perform tasks and interact with its environment without needing direct instructions at every step.
Autonomous System
A system that can function and make decisions on its own, without human control once it is activated. It can manage itself and adapt to different situations.

Section B, containing 12 terms

Backpropagation
A key algorithm used to train neural networks. It works by calculating errors and then adjusting the network's internal settings (weights) to reduce those errors and improve accuracy.
Backward Pass
The step in training a neural network where errors are calculated and then propagated back through the network to update the model's parameters (weights), aiming to reduce these errors in future predictions. This is often done using the Backpropagation algorithm.
Base Model
A general-purpose AI model that has been trained on a large amount of data. It serves as a starting point that can be further customized or fine-tuned for specific tasks.
Batch Processing
Handling data or performing AI calculations in groups (batches) rather than one item at a time. This is often more efficient for computers, especially when dealing with large datasets.
Bayesian Networks
Visual models that represent probabilities and relationships between different things. They are used to reason about uncertain events and make predictions based on probabilities.
Bias (AI)
In the context of AI, bias refers to systematic and unfair preferences or discrimination within AI systems, often leading to skewed outcomes that unfairly favor or disadvantage certain groups. This type of bias can arise from biased training data, flawed algorithm design, or societal biases reflected in the AI system. Separately, in neural networks, "bias" also (and confusingly, using the same term) refers to a constant input added to each neuron, independent of the current input data, which helps the neuron learn more complex patterns.
Big Data
Extremely large and complex collections of data that are difficult to process and analyze with traditional methods. Analyzing big data can reveal valuable patterns and insights.
Black Box
An AI model, especially a deep learning model, whose inner workings are so complex that it's difficult to understand how it arrives at its decisions or predictions. It is opaque to human understanding.
Bot
A software program designed to automatically perform certain tasks. Bots are often used for repetitive jobs online, like customer service chatbots or gathering data from websites.
Brand Voice Integration
Making sure that AI-generated content matches a company's specific style and tone of communication. This helps maintain a consistent brand identity in all communications.
Business Process Automation (BPA)
Using technology to automate complex, end-to-end business operations. BPA aims to make business processes more efficient, reduce errors, and lower operational costs.
Business Rules Engine (BRE)
A software system that is used to automate decision-making based on a set of predefined business rules. It allows businesses to easily manage and change these rules as needed.

Section C, containing 28 terms

Carbon Footprint
The total amount of greenhouse gases produced by AI activities, like training large models and running data centers. This includes emissions from electricity use and hardware production.
Categorical Data
Data that falls into distinct categories or groups, rather than being numerical. For example, colors or types of products are categorical data. AI needs to handle this type of data differently than numbers.
Chain-of-Thought
A way of prompting LLMs to solve complex problems by explicitly asking them to break down their reasoning into a series of logical steps before giving the final answer.
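As a toy illustration (the wording below is invented, not a quoted prompt), a chain-of-thought prompt shows the model a worked, step-by-step answer before posing the new question:

    # Sketch of a chain-of-thought prompt; the first Q&A pair demonstrates
    # the step-by-step reasoning style the model should imitate.
    prompt = (
        "Q: A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
        "A: Let's think step by step. 12 pens is 4 groups of 3 pens. "
        "Each group costs $2, so 4 x $2 = $8. The answer is $8.\n\n"
        "Q: A train travels 60 km in 45 minutes. What is its speed in km/h?\n"
        "A: Let's think step by step."
    )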
Chatbot
A software application or AI program designed to simulate conversation with human users, typically over text or voice interfaces. Chatbots are often used for customer service, information retrieval, or automated assistance, and can range from simple rule-based bots to sophisticated conversational AI models.
Chatbot Automation
Using AI-powered chatbots to automatically handle conversations with users, often for customer service or information inquiries. This can reduce the need for human agents for routine interactions.
ChatGPT
A popular conversational AI model created by OpenAI. It is known for its ability to generate human-like text in response to a wide range of prompts and questions.
Chief AI Officer (CAIO)
A high-level executive responsible for leading an organization's overall AI strategy, development, and implementation. The CAIO drives AI innovation, ensures alignment with business goals, and oversees the ethical and responsible use of AI technologies across the enterprise.
Chief Automation Officer (CAO)
A high-level executive who leads a company's automation and AI strategy. They are responsible for driving digital transformation across the organization and ensuring these technologies align with business goals.
Classification
A type of machine learning task where the goal is to assign input data to specific categories or classes. For example, classifying emails as spam or not spam.
Cloud Computing
Using remote servers and networks over the internet to access computing resources. This is often used for AI training and deployment because it offers scalable and flexible computing power.
Cluster Computing
Using a group of connected computers working together as if they were a single system. This is often done to handle very large AI workloads that require a lot of processing power, like training complex models.
Clustering
A machine learning technique for grouping similar data points together without knowing in advance what the groups should be. It is used to find natural groupings in data.
Cognitive Automation
Combining AI with automation to handle tasks that require some level of human-like thinking, reasoning, and learning. This goes beyond simple rule-based automation.
Compliance Measures
Rules and standards that ensure AI systems are used in ways that follow laws and ethical guidelines. This includes protecting data privacy and ensuring fair and responsible use of AI.
Compute
The amount of processing power needed to perform AI calculations, especially for training and running models. It's a measure of how much computational work is required.
Compute Resources
The hardware and infrastructure, such as processors, memory, and servers, that are necessary to train and operate AI models. These are the physical tools for AI computing.
Computer Vision
A field of AI that enables computers to "see" and understand images and videos. It involves tasks like image recognition, object detection, and image analysis.
Conditional Automation
An automation system that only activates or performs tasks when specific conditions or criteria are met. It is event-driven and reactive to certain triggers.
Confabulation
The generation of fabricated, distorted, or nonsensical information presented as if it were factual and true. Similar to hallucination, but the term confabulation, borrowed from psychology, emphasizes the AI's construction of plausible-sounding but ultimately invented content, rather than a simple misperception of reality.
Consciousness
The state of being aware of oneself and the world, having thoughts and feelings. In AI, it refers to the debated possibility of machines achieving a similar state of awareness.
Context Window
The amount of text that an LLM can consider at one time when processing or generating text. A larger context window allows the model to understand and generate longer, more coherent pieces of text.
Continuous Improvement
An ongoing process of refining and enhancing AI systems. This involves regularly updating models, incorporating new data, and adapting to technological advancements to maintain and improve performance.
Conversation Designer
A specialized role focused on designing and optimizing the user experience of conversational AI systems like chatbots and virtual assistants. Conversation Designers craft natural, engaging, and effective dialogues, considering user needs, conversation flow, and overall user satisfaction with AI interactions.
Conversational AI
A branch of AI focused on developing systems that can engage in natural, human-like conversations. Conversational AI draws heavily on Natural Language Processing (NLP) and machine learning to enable computers to understand and generate human language in interactive dialogues, powering applications like chatbots and virtual assistants.
Convolutional Neural Network (CNN)
A type of neural network particularly effective for processing visual data like images. CNNs are designed to automatically learn patterns from images.
Correlation
A statistical measure that indicates how strongly two things are related or tend to change together. In AI, it can be used to understand relationships between different data features.
Cross-Validation
A technique to evaluate how well a machine learning model will perform on new, unseen data. It involves splitting the data into multiple parts, training on some, and testing on others to get a reliable performance estimate.
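To make the mechanics concrete, here is a minimal Python sketch of k-fold cross-validation, using a trivial mean predictor as a stand-in for a real model (all values invented for illustration):

    # Split n items into k folds; train on k-1 folds, test on the held-out one.
    def k_fold_splits(n, k):
        fold = n // k
        for i in range(k):
            test = list(range(i * fold, (i + 1) * fold))
            train = [j for j in range(n) if j not in test]
            yield train, test

    data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    for train, test in k_fold_splits(len(data), k=3):
        mean = sum(data[j] for j in train) / len(train)             # "train" the stand-in model
        mse = sum((data[j] - mean) ** 2 for j in test) / len(test)  # evaluate on held-out data
        print(f"held-out fold {test}: MSE {mse:.2f}")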
Cyberdyne Systems
A fictional company from the "Terminator" movies, notorious for creating Skynet, a dangerous AI. It is often mentioned in discussions about the potential risks of uncontrolled AI development.

Section D, containing 26 terms

Data (Star Trek)
An android character in "Star Trek: The Next Generation" who is on a quest to understand human emotions and consciousness. Data is a popular reference point when discussing AI personhood and sentience.
Data Augmentation
Techniques used to increase the amount and variety of training data for AI models by creating modified versions of existing data. This can improve model performance and generalization.
Data Automation
The process of using technology to automatically handle data-related tasks, such as collecting, organizing, and processing data, reducing the need for manual work.
Data Center
Large facilities that house the computer systems, servers, and infrastructure needed for large-scale AI operations. They provide the physical space, power, and cooling for AI computing.
Data Cleaning
The process of identifying and correcting errors, inconsistencies, and inaccuracies in datasets. This is a critical step to ensure data quality and reliable AI model training.
Data Distribution
How data values are spread out across a dataset. Understanding data distribution is important for choosing appropriate AI models and interpreting their behavior.
Data Engineering
The field focused on building and maintaining the systems and infrastructure needed to collect, store, and access data at scale. It is essential for making data usable for AI and analysis.
Data Lake
A storage repository that holds a vast amount of raw data in its native format, including structured and unstructured data. It is used for big data analytics and AI development.
Data Lineage
Tracking the history and journey of data, including where it came from, how it has changed, and where it moves. This is important for data quality and governance.
Data Mining
The process of exploring large datasets to discover hidden patterns, relationships, and useful information. It is a key technique in data science and knowledge discovery.
Data Pipeline
A series of steps that data goes through from its source to its destination, often involving extraction, transformation, and loading (ETL). Data pipelines are crucial for efficient data processing in AI.
Data Privacy
The principles and practices for ensuring that personal data is handled responsibly and in accordance with privacy regulations and user expectations.
Data Quality
The overall fitness of data for its intended uses. High-quality data is accurate, complete, consistent, and reliable, which is essential for effective AI.
Data Science
An interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data. It combines statistics, computer science, and domain expertise.
Data Warehouse
A data management system designed for analytical purposes, serving as a central repository for structured data from various sources across an organization. Following Bill Inmon's original definition, a data warehouse is subject-oriented, integrated, time-variant, and non-volatile, specifically built to support management's decision-making and business intelligence activities.
Decision Automation
Using AI and rule-based systems to automate complex business decisions, sometimes without human intervention. This can speed up processes and improve consistency in decision-making.
Decision Tree
A type of machine learning model that makes decisions by following a tree-like structure of rules. Each branch represents a decision based on a data feature, leading to a final outcome.
Deep Learning
A subfield of machine learning that uses neural networks with many layers (deep neural networks). Deep learning has been very successful in areas like image recognition and natural language processing.
Development Environment
The set of tools, software, and resources used for creating, testing, and debugging AI models and applications. This typically includes code editors, libraries, and testing frameworks.
Diffusion Model
A type of generative AI model that creates images by starting with random noise and gradually refining it into a coherent image. It is known for generating high-quality and detailed images.
Digital Transformation
The strategic integration of digital technologies, including AI, to fundamentally improve business processes, customer experiences, and create new value.
Digital Worker
A software program, often powered by AI, that can perform routine digital tasks, similar to a human office worker. It can automate tasks like data entry, processing documents, and responding to emails.
Distributed Training
Training an AI model across multiple machines or processors working in parallel. This is done to handle very large models or datasets that would be too much for a single machine to handle efficiently.
Document Generation
The automated creation of documents like reports, contracts, or emails using AI. This often involves NLP to generate text and rules-based systems for formatting and ensuring compliance.
Dynamic Variables
Elements within AI systems that can be changed or adjusted during operation. This allows for flexibility and adaptation in how the AI functions in response to different situations.
Dynamic Workflow Automation
Automated processes that can adapt and change in real-time based on incoming data and changing circumstances. This makes automation more flexible and responsive to dynamic environments.

Section E, containing 14 terms

Edge AI
Running AI computations on local devices like smartphones or sensors, rather than sending data to the cloud. This reduces latency, saves bandwidth, and enhances privacy.
Edge Computing
A distributed computing model that processes data closer to where it is generated, at the "edge" of the network, reducing reliance on central cloud infrastructure.
Embeddings
Representing data, like words or images, as dense vectors of numbers. These vectors capture the semantic meaning and relationships of the data, making it easier for AI to process and understand.
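A toy Python sketch (the vectors below are hand-made, not taken from a real embedding model) shows how cosine similarity compares embeddings:

    import math

    # Invented 3-dimensional "embeddings"; real models use hundreds of dimensions.
    king  = [0.90, 0.10, 0.80]
    queen = [0.85, 0.15, 0.82]
    apple = [0.10, 0.90, 0.20]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norms

    print(cosine(king, queen))  # near 1.0: semantically close
    print(cosine(king, apple))  # much lower: less related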
Emergent Behavior
Complex or unexpected behaviors that arise in AI systems from the interaction of simpler rules or components. These behaviors are not explicitly programmed but emerge from the system's dynamics.
Energy Consumption
The amount of electrical energy used by AI systems, especially during training large models and running data centers. High energy consumption is a growing concern for AI.
Energy Efficiency
Efforts to design and operate AI systems in a way that minimizes the amount of energy they use, while still maintaining good performance. This is important for reducing costs and environmental impact.
Enterprise Automation
Implementing automation solutions across an entire organization, rather than just in isolated departments. This involves strategic, large-scale automation initiatives.
Ethics of AI
The branch of ethics that studies the moral issues raised by AI. It involves developing ethical principles and guidelines to ensure AI is developed and used responsibly and for the benefit of humanity.
ETL (Extract, Transform, Load)
A process in data management that involves extracting data from different sources, transforming it into a consistent format, and loading it into a target system like a data warehouse for analysis.
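A toy Python sketch of the three ETL stages (data and field names invented for illustration):

    # Extract: raw rows, e.g. read from a CSV export.
    raw = ["alice,42", "BOB,37 ", "carol,29"]
    warehouse = []                                     # Load target: stands in for a warehouse table
    for row in raw:
        name, age = row.strip().split(",")             # Transform: consistent names and types
        warehouse.append({"name": name.title(), "age": int(age)})
    print(warehouse)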
Evals
Systematic evaluations and tests used to measure the performance and capabilities of AI systems. Evals help to identify areas for improvement and ensure AI systems meet desired standards.
Ex Machina
A science fiction film that explores themes of AI consciousness, testing, and the ethical dilemmas of creating sentient machines. It raises questions about the nature of AI and humanity.
Exception Handling Automation
Automated systems that are designed to detect when errors or unexpected situations occur in automated processes, and then automatically take steps to resolve or manage these exceptions.
Explainable AI
AI systems that are designed to provide clear and understandable reasons for their decisions and actions. This is important for building trust and accountability in AI.
Exploratory Data Analysis (EDA)
The initial process of analyzing and visualizing datasets to understand their main characteristics, patterns, and anomalies. EDA is crucial for getting to know your data before building AI models.

Section F, containing 10 terms

Fake AI
A term for products or systems that are marketed as AI but actually rely on simple rules or even human labor behind the scenes, exaggerating their capabilities for marketing purposes.
Fauxtomation
Processes that are claimed to be automated but still require a significant amount of human intervention to function correctly. It is automation in name only, not in practice.
Feature
A specific, measurable piece of information about the data that is used as input for machine learning models. For example, in image recognition, features could be edges or colors.
Feature Engineering
The process of selecting, transforming, and creating features from raw data to make it more suitable for machine learning models. Good feature engineering can significantly improve model performance.
Feedback Loops
Processes built into AI systems that allow them to learn and improve over time. This includes user feedback, human review, and automated optimization mechanisms that refine the AI's performance.
Few-Shot Prompting
A technique for using LLMs where you provide only a small number of examples in the prompt to guide the model to perform a new task or generate a specific type of output.
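A minimal sketch of a few-shot prompt (the reviews are invented): a handful of worked examples, then the new case for the model to complete:

    # Few-shot prompt: two labelled examples, then the new input.
    prompt = (
        "Classify the sentiment of each review as Positive or Negative.\n\n"
        'Review: "Absolutely loved it, would buy again."\n'
        "Sentiment: Positive\n\n"
        'Review: "Broke after two days, very disappointed."\n'
        "Sentiment: Negative\n\n"
        'Review: "Fast shipping and works exactly as described."\n'
        "Sentiment:"
    )
    # Sent to an LLM, the expected completion is "Positive".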
Fine-tuning
The process of taking a pre-trained AI model, like a base model, and further training it on a more specific dataset to make it better at a particular task or domain. It customizes the model for a specialized use.
FLOPS (Floating Point Operations per Second)
A measure of a computer's performance, particularly for tasks involving numerical calculations. In AI, FLOPS are used to measure the speed and power of hardware used for training and running models.
Forward Pass
The process of feeding input data through a neural network to compute an output or prediction. It is a fundamental step in both training and using a neural network for inference.
Foundation Model
A very large AI model, typically trained on a massive amount of data, that can be adapted or fine-tuned for a wide range of downstream tasks. These models form a foundation for many AI applications.

Section G, containing 8 terms

Generative Adversarial Networks (GANs)
A type of generative AI architecture that uses two neural networks – a generator and a discriminator – competing against each other to produce increasingly realistic synthetic data.
Generative AI
A type of AI that focuses on creating new content, such as text, images, music, or code. Generative models learn from existing data to generate new, similar data.
Generative Model Bias
Systematic and unfair skews present in the output of generative AI models, reflecting biases in training data or model design which can lead to skewed or discriminatory content.
GPT (Generative Pre-trained Transformer)
A family of powerful large language models developed by OpenAI. GPT models are based on the transformer architecture and are known for their natural language capabilities.
GPU (Graphics Processing Unit)
A type of processor originally designed for graphics processing but now widely used to accelerate AI computations, especially for training deep learning models due to their parallel processing capabilities.
Gradient Descent
An optimization algorithm used to train machine learning models, especially neural networks. It works by iteratively adjusting the model's parameters to minimize the error between predicted and actual outputs.
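A one-parameter Python sketch makes the loop visible: minimize the error f(w) = (w - 3)^2, whose gradient is 2(w - 3) (the target value 3 is arbitrary):

    w = 0.0
    learning_rate = 0.1          # step size; see the Learning Rate entry
    for step in range(50):
        gradient = 2 * (w - 3)   # slope of the error at the current w
        w -= learning_rate * gradient
    print(w)                     # converges toward the minimum at w = 3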
Graph Neural Networks (GNNs)
A type of neural network designed to work with data that is structured as graphs, like social networks or knowledge graphs. GNNs can learn from relationships and connections in graph data.
Green AI
An approach to developing and using AI that prioritizes environmental sustainability and energy efficiency. It focuses on reducing the environmental impact of AI training and deployment.

Section H, containing 9 terms

HAL 9000
The sentient computer from the movie "2001: A Space Odyssey." HAL 9000 is a classic example in discussions about AI safety, control, and the potential for AI to deviate from human intentions.
Hallucination
An instance of AI-generated content not based on real data, presenting factually incorrect or nonsensical information as if it were true. While "hallucination" is the commonly used term in AI, confabulation would be a more technically accurate descriptor for this phenomenon.
Hardware Acceleration
Using specialized hardware components, such as GPUs and TPUs, to perform AI computations more quickly and efficiently than using general-purpose CPUs alone. This is crucial for handling demanding AI workloads.
Her
A film that explores the relationship between a human and an AI operating system. "Her" raises questions about consciousness, emotional intelligence in AI, and the future of human-AI relationships.
High-Performance Computing (HPC)
Using supercomputers and advanced computing systems to handle extremely complex and data-intensive tasks, including large-scale AI model training and simulations. HPC is essential for pushing the boundaries of AI.
Human Intelligence
The range of cognitive abilities that humans possess, including reasoning, learning, problem-solving, creativity, and emotional understanding. It serves as the benchmark and inspiration for developing AI.
Human-in-the-Loop
An approach to AI systems where humans are involved in the decision-making process, especially for critical or complex tasks. This can involve human oversight, validation, or direct intervention in AI operations.
Hyperautomation
A strategic approach to automate as many business processes as possible using a combination of RPA, AI, machine learning, and other advanced technologies. It aims for broad and deep automation across the enterprise.
Hypothesis Testing
A statistical method used to determine if there is enough evidence to support a particular hypothesis or claim about data. It is used in AI research to validate model performance and insights.

Section I, containing 9 terms

I, Robot
A collection of stories by Isaac Asimov that introduced the Three Laws of Robotics. These stories explore ethical dilemmas and safety considerations related to AI and robotics, and have been influential in the field.
Imputation
A technique for handling missing data in datasets by replacing the missing values with estimated or substituted values. This is done to make the data complete and usable for AI models.
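A minimal mean-imputation sketch in Python (ages invented for illustration):

    ages = [34, None, 29, 41, None, 38]
    observed = [a for a in ages if a is not None]
    mean_age = sum(observed) / len(observed)                    # 35.5
    imputed = [a if a is not None else mean_age for a in ages]
    print(imputed)  # missing entries replaced by the column mean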
Inference
The process of using a trained AI model to make predictions or decisions on new, unseen data. It is the stage where the model is actually put to work to solve problems or generate outputs.
Inference Optimization
Techniques used to improve the speed and efficiency of the inference process, making AI models faster and less resource-intensive when making predictions. This is important for real-time applications.
Infrastructure as Code
Managing and provisioning computer infrastructure, including resources for AI, using code and machine-readable configuration files. This allows for automation and version control of infrastructure setup.
Intelligence Augmentation
Using AI technologies to enhance and extend human cognitive abilities, rather than replacing humans. It focuses on AI as a tool to make people smarter and more effective.
Intelligent Automation (IA)
A broader term for combining automation technologies with AI capabilities like machine learning and natural language processing. IA aims to automate more complex and intelligent tasks.
Intelligent Process Automation (IPA)
An advanced form of automation that integrates RPA with AI technologies like machine learning and NLP to automate complex, end-to-end business processes.
IoT Automation
Using the Internet of Things (IoT) devices and data to trigger and enable automation. This allows for smart and connected automation systems that respond to real-world data from IoT sensors.

Section J, containing 2 terms

JARVIS
The AI assistant featured in the "Iron Man" movies. JARVIS is often cited as an example of a sophisticated natural language AI interface that can understand and respond to complex human commands.
Jupyter Notebook
An open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. It is widely used in AI research and development for interactive coding and data analysis.

Section K, containing 2 terms

K-Nearest Neighbors (KNN)
A simple machine learning algorithm used for classification and regression. For classification, it assigns a data point to the majority class among its nearest neighbors; for regression, it averages their values.
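A toy implementation in Python (points and labels invented) shows the majority-vote idea:

    import math
    from collections import Counter

    train = [((1.0, 1.0), "red"), ((1.2, 0.8), "red"),
             ((5.0, 5.0), "blue"), ((5.2, 4.9), "blue")]

    def knn_predict(point, k=3):
        nearest = sorted(train, key=lambda item: math.dist(point, item[0]))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]   # majority class among the k nearest

    print(knn_predict((1.1, 0.9)))  # "red"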
Knowledge Graph
A structured way to represent knowledge as a network of entities and relationships between them. Knowledge graphs are used to store and reason about complex information, and can be used in AI for tasks like semantic search and question answering.

Section L, containing 4 terms

Large Language Model (LLM)
A type of neural network with a very large number of parameters, trained on massive amounts of text data. LLMs are designed to understand, generate, and manipulate human language, and are the foundation for many modern AI applications.
Latency
The delay or lag between an input and the corresponding output in an AI system. Low latency is important for real-time applications where immediate responses are needed.
Learning Rate
A setting in machine learning training that controls how much the model adjusts its parameters in response to estimated errors during each training step. It affects how quickly and effectively the model learns.
LSTM (Long Short-Term Memory)
A type of recurrent neural network architecture that is particularly good at learning from sequential data, like text or time series data. LSTMs can remember information over long sequences, making them useful for natural language processing.

Section M, containing 27 terms

Machine Consciousness
The hypothetical possibility of AI systems developing subjective awareness and self-awareness, similar to human consciousness. This is a topic of ongoing debate and research in AI and philosophy.
Machine Ethics
A field of study focused on designing AI systems that can make ethical decisions and behave in morally acceptable ways. It addresses how to program values and ethical principles into AI.
Machine Learning
A core field of AI that focuses on enabling computers to learn from data without being explicitly programmed. Machine learning algorithms allow systems to improve their performance on a task as they are given more data.
Machine Learning Automation
Using automation techniques to streamline and optimize the process of developing, training, and deploying machine learning models. This can include automating data preparation, model selection, and hyperparameter tuning.
Machine Learning Engineer (MLE)
A technical role focused on the practical application of machine learning to build, deploy, and maintain AI systems in real-world applications. Machine Learning Engineers are responsible for developing, training, and optimizing machine learning models, as well as ensuring their scalability, reliability, and performance in production environments.
Macro Automation
Automating large-scale, high-level business processes that span across multiple systems, departments, and workflows within an organization. Macro automation focuses on end-to-end process optimization.
Maintenance Contracts
Agreements that provide ongoing support, updates, and performance monitoring for AI systems. These contracts ensure that AI systems continue to function effectively and are kept up-to-date over time.
Matrix, The
A film series depicting a dystopian future where reality as perceived by most humans is actually a simulated reality created by machines. "The Matrix" explores themes of artificial reality, machine consciousness, and the nature of human-AI relationships.
Mean Square Error (MSE)
A common metric used to evaluate the performance of regression models in machine learning. MSE measures the average squared difference between the predicted values and the actual values, indicating the accuracy of the model's predictions.
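The calculation itself is short; a Python sketch with invented values:

    actual    = [3.0, 5.0, 2.5]
    predicted = [2.8, 5.4, 2.0]
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    print(mse)  # about 0.15: the average of the squared differences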
Memory Bandwidth
The rate at which data can be transferred to or from memory in a computer system. High memory bandwidth is crucial for AI model performance, especially for models that require access to large amounts of data quickly.
Meta-Agent
An AI agent that is designed to manage and coordinate other AI agents. Meta-agents can orchestrate complex tasks by delegating sub-tasks to other agents and managing their interactions.
Metadata
Data that provides information about other data. Metadata describes the characteristics, context, and usage of data, and is essential for data management, discovery, and governance.
Microscripting
Creating small, automated scripts to handle very specific, repetitive tasks within larger automated workflows. Microscripting helps to break down complex automation into manageable steps.
MLOps (Machine Learning Operations)
A set of practices that combines machine learning, DevOps, and data engineering to reliably and efficiently deploy, manage, and monitor machine learning models in production. MLOps aims to streamline the entire ML lifecycle for robust and scalable AI systems.
Model
In machine learning, a model is a mathematical representation of patterns learned from training data. It is used to make predictions or decisions on new data. Models are the core component of AI systems.
Model Architecture
The design and structure of an AI model, particularly neural networks. It defines how the model is organized, including the types and arrangement of layers and components. The architecture significantly impacts a model's capabilities.
Model Compression
Techniques used to reduce the size and computational demands of AI models, making them more efficient to deploy and run, especially on devices with limited resources. Compression can involve methods like quantization and pruning.
Model Deployment
The process of making a trained AI model available for use in real-world applications. This involves integrating the model into a production environment where it can receive inputs and generate outputs.
Model Drift
The phenomenon where the performance of a deployed AI model degrades over time. This is often due to changes in the real-world data that the model encounters, which may differ from the data it was trained on.
Model Evaluation
The process of assessing the performance and effectiveness of an AI model using various metrics and techniques. Evaluation is crucial for understanding a model's strengths and weaknesses and for comparing different models.
Model Hosting
Providing the infrastructure and services needed to make AI models accessible for inference. This often involves cloud platforms and specialized hosting solutions that ensure models are available and scalable.
Model Monitoring
Continuously tracking the performance and behavior of AI models that are deployed in production. Monitoring helps to detect issues like model drift, performance degradation, and unexpected errors, ensuring ongoing reliability.
Model Registry
A centralized system for storing, versioning, and managing AI models and related artifacts. It provides a repository for models, making it easier to track, share, and deploy models in a controlled manner.
Model Serving
The process of delivering predictions or outputs from a deployed AI model in response to incoming requests. Model serving systems are designed to handle requests efficiently and reliably in a production environment.
Model Versioning
Keeping track of different versions of AI models throughout their lifecycle. Versioning is important for managing changes, rolling back to previous versions if needed, and ensuring reproducibility of AI systems.
Multi-Agent System
An AI system that consists of multiple AI agents that interact with each other to solve complex problems or achieve common goals. These agents can collaborate, compete, or coordinate their actions.
Multi-Modal AI
AI systems that can process and understand multiple types of data input, such as text, images, audio, and video, simultaneously. This allows for a richer and more comprehensive understanding of information.

Section N, containing 4 terms

Natural Language Processing (NLP)
A field within AI that focuses on enabling computers to understand, interpret, and generate human language. NLP powers applications like chatbots, translation tools, and voice assistants, bridging the gap between human communication and computer understanding.
Neural Network
A foundational computing system in deep learning, inspired by the human brain. Neural networks consist of interconnected nodes (neurons) in layers that process information, learn patterns from data, and are essential for complex AI tasks like image and speech recognition.
Normalization
A data preparation technique used to scale numerical data to a standard range, like 0 to 1. Normalization ensures that all features contribute equally during model training, preventing features with larger values from disproportionately influencing the learning process.
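A min-max scaling sketch in Python (values invented):

    values = [12, 47, 30, 5, 60]
    lo, hi = min(values), max(values)
    normalized = [(v - lo) / (hi - lo) for v in values]
    print(normalized)  # 5 maps to 0.0, 60 maps to 1.0, the rest fall in between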
Null Hypothesis
In statistical hypothesis testing, the null hypothesis is a statement that there is no effect or relationship in the data being studied. It's the default assumption that researchers try to disprove by finding evidence for an alternative hypothesis.

Section O, containing 4 terms

Optimization
The process of adjusting an AI model's internal settings (parameters) to improve its performance on a specific task, like image recognition or language translation. Algorithms like gradient descent are used to find the best parameter values that minimize errors.
Orchestration
The automated coordination and management of complex IT systems, applications, and services, especially in distributed environments. In AI, orchestration is crucial for managing workflows, data pipelines, and the deployment of models across different computing resources.
Outlier
A data point that is significantly different from the other data points in a dataset, standing far apart from the general distribution. Outliers can be caused by errors in data collection or represent genuine, but rare, events, and can sometimes skew AI model training.
Overfitting
A common problem in machine learning where a model learns the training data too well, including its noise and random fluctuations. An overfit model performs excellently on the training data but poorly on new, unseen data because it fails to generalize effectively.

Section P, containing 21 terms

P-Value
In statistical hypothesis testing, the p-value is a number that indicates the probability of observing results as extreme as, or more extreme than, those obtained, *if* the null hypothesis were actually true. A low p-value (typically below 0.05) is often considered evidence against the null hypothesis.
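As a hedged illustration, SciPy's two-sample t-test returns a p-value directly (the samples below are invented):

    from scipy import stats

    group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
    group_b = [5.8, 6.0, 5.7, 6.1, 5.9]
    result = stats.ttest_ind(group_a, group_b)  # tests whether the group means differ
    print(result.pvalue)  # a small p-value is evidence against the null hypothesis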
Pandas
A powerful and popular Python library essential for data manipulation and analysis in AI and data science. Pandas provides easy-to-use data structures like DataFrames, making it efficient to clean, process, and analyze structured data for machine learning tasks.
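A small illustrative session (column names and values invented):

    import pandas as pd

    df = pd.DataFrame({
        "product": ["widget", "gadget", "widget", "gizmo"],
        "price":   [9.99, None, 12.50, 7.25],
    })
    df["price"] = df["price"].fillna(df["price"].mean())  # simple mean imputation
    print(df.groupby("product")["price"].mean())          # average price per product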
Paperclip Maximiser
A famous thought experiment illustrating the potential dangers of misaligned AI goals, especially for ASI. It imagines an AI single-mindedly optimized to maximize paperclip production, potentially at the expense of all other values, highlighting the need for careful AI value alignment.
Parallel Processing
Simultaneous execution of computations across multiple processors or cores within a computer system. Parallel processing is essential for handling the computationally intensive workloads of AI, especially for training large models and processing massive datasets quickly.
Parameter
A variable within an AI model that the model learns from training data. Parameters, such as weights in neural networks, determine how the model makes predictions and are adjusted during training to optimize the model's performance.
Parameter-Efficient Fine-Tuning (PEFT)
Techniques for fine-tuning large AI models that efficiently adapt them to new tasks by updating only a small subset of the model's parameters. PEFT methods are crucial for reducing the computational cost and resource demands of customizing massive models.
Pattern Recognition
The capability of AI systems to automatically identify, classify, and interpret recurring patterns within complex datasets. Pattern recognition is fundamental to many AI applications, enabling tasks like image and speech recognition, fraud detection, and medical diagnosis.
Personhood
The complex philosophical and legal concept defining what it means to be a "person," including rights, responsibilities, and moral status. In AI discussions, personhood is debated in relation to advanced AI systems and whether they could or should be considered persons with rights.
Philosophy of AI
A branch of philosophy that critically examines the nature, implications, and ethical considerations of AI. It explores profound questions about consciousness, intelligence, ethics, and the future relationship between humans and increasingly capable machines.
Pipeline Automation
Automating sequential processes where the output of one step directly feeds into the next, creating a streamlined flow. Pipeline automation is common in software development, data processing, and manufacturing, improving efficiency and reducing manual intervention in multi-stage operations.
Power Usage Effectiveness (PUE)
A key metric for measuring the energy efficiency of data centers, especially those powering AI. PUE is calculated as the ratio of total data center energy to IT equipment energy; a lower PUE indicates greater energy efficiency and reduced environmental impact.
Predictive Automation
Using AI to anticipate future outcomes and automatically trigger actions based on these predictions, enabling proactive and preemptive responses. Predictive automation goes beyond reactive automation, allowing systems to act intelligently in advance of events.
Pretendtelligence
A humorous and critical term for AI systems that give the *impression* of intelligence but rely on superficial methods like simple pattern matching or clever marketing, rather than genuine reasoning or understanding. It points to AI that is "pretending" to be intelligent.
Process Automation
Utilizing technology to automate routine, repeatable business processes, such as data entry, invoice processing, or customer support ticketing. Process automation aims to improve efficiency, reduce errors, and free up human employees for more strategic tasks.
Process Mining
A data science technique that uses data from event logs to discover, monitor, and improve real-world processes. Process mining provides insights into how processes *actually* work, enabling organizations to identify bottlenecks and opportunities for automation or optimization.
Production Environment
The live, real-world setting where AI models and applications are deployed and actively used by end-users to solve problems or deliver services. The production environment requires robust infrastructure, monitoring, and maintenance to ensure reliability and performance in real-world conditions.
Prompt
In the context of LLMs, a prompt is the input text or instructions given to the model to elicit a desired response. Effective prompt engineering is crucial for guiding LLMs to generate useful, relevant, and high-quality outputs.
Prompt Drift
The phenomenon where the output quality or behavior of an LLM gradually changes or degrades over time, even when using the same prompt. Prompt drift necessitates ongoing monitoring and adjustment of prompts to maintain consistent and desired results from LLMs.
Prompt Engineering
The skill and art of designing effective and well-structured prompts to elicit desired, high-quality, and specific outputs from AI models, especially LLMs. Good prompt engineering is key to unlocking the full potential and controlling the behavior of powerful language models.
Prompt Injection
A serious security vulnerability in LLMs where malicious or adversarial prompts are crafted to manipulate the model into performing unintended actions or bypassing intended instructions or security measures. Prompt injection can lead to unintended actions, data breaches, or the generation of harmful content by the AI.
Prompt Template
A pre-designed, reusable format or structure for creating prompts, often with placeholders for specific inputs. Prompt templates help ensure consistency in prompting, streamline prompt creation, and make it easier to adapt prompts for different inputs or tasks.
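A minimal sketch in Python (the template wording is invented):

    TEMPLATE = (
        "You are a helpful assistant.\n"
        "Summarize the following {document_type} in {num_sentences} sentences:\n\n"
        "{text}"
    )

    prompt = TEMPLATE.format(
        document_type="meeting transcript",
        num_sentences=2,
        text="...",   # the actual content goes here
    )
    print(prompt)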

Section Q, containing 1 term

Quantization
A technique for reducing the computational and memory demands of AI models by decreasing the precision of their numerical representations, for example by using 8-bit integers instead of 32-bit floats. Quantization makes models smaller, faster for inference, and more efficient to deploy on resource-constrained devices.
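A toy symmetric-quantization sketch in Python (weights invented):

    weights = [0.12, -0.50, 0.33, 0.90, -0.77]
    scale = max(abs(w) for w in weights) / 127       # int8 holds -128..127
    quantized = [round(w / scale) for w in weights]  # stored as small integers
    restored  = [q * scale for q in quantized]       # approximate originals
    print(quantized)
    print(restored)  # close to, but not exactly, the original floats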

Section R, containing 13 terms

R-Squared
A statistical measure, also known as the coefficient of determination, that indicates how well a regression model "fits" the data. R-squared represents the proportion of the variance in the dependent variable that is predictable from the independent variables; a higher R-squared generally indicates a better model fit.
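The definition translates directly into code; a Python sketch with invented values:

    actual    = [3.0, 5.0, 2.5, 7.0]
    predicted = [2.9, 5.1, 2.7, 6.6]
    mean_y = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))  # unexplained variance
    ss_tot = sum((a - mean_y) ** 2 for a in actual)                # total variance
    print(1 - ss_res / ss_tot)  # about 0.98: the model explains most of the variance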
R2-D2
The iconic astromech droid from the "Star Wars" franchise, widely recognized as a beloved example of AI in popular culture. R2-D2 showcases practical AI capabilities in robotics, problem-solving, and loyalty, embodying a helpful and resourceful AI companion.
RAG (Retrieval-Augmented Generation)
A technique to enhance the knowledge and factual accuracy of LLM responses. RAG works by first retrieving relevant information from external knowledge sources (like a database or the internet) and then using this retrieved information to inform and generate the LLM's response, making it more grounded and reliable.
Random Sampling
A fundamental statistical method for selecting a representative subset of data points from a larger dataset, where each data point has an equal chance of being chosen. Random sampling is crucial for ensuring unbiased data selection in machine learning, creating training, validation, or test sets that accurately reflect the overall data distribution.
Reasoning Model
An AI model specifically designed to perform logical reasoning, inference, and problem-solving tasks that require more than just pattern matching. Reasoning models aim to mimic human-like deductive or inductive reasoning, enabling AI to tackle complex, knowledge-rich challenges.
Recurrent Neural Network (RNN)
A type of neural network architecture specifically designed to process sequential data, such as text, time series, or audio. RNNs have feedback connections that allow them to maintain a "memory" of past inputs, making them suitable for tasks where context and sequence order are crucial.
Red Queen Hypothesis
Borrowing from "Through the Looking-Glass," this concept in AI describes the idea that AI systems and their environment are in a constant state of competitive co-evolution, like a perpetual race. It suggests that AI must continuously improve and adapt just to maintain its level of performance in a dynamic and evolving landscape.
Regression
A fundamental type of machine learning task where the goal is to predict a continuous numerical output value, such as predicting house prices, stock values, or temperature. Regression models learn the relationship between input features and a continuous target variable, allowing them to estimate numerical outcomes.
Reinforcement Learning
A distinct type of machine learning where an AI agent learns to make optimal decisions in an environment by trial and error to maximize a reward signal. Reinforcement learning is inspired by behavioral psychology and is particularly effective for training AI in complex environments, like games or robotics.
Replicant
In the movie "Blade Runner," replicants are bioengineered artificial humans that are virtually indistinguishable from natural humans. Replicants serve as a powerful fictional exploration of AI consciousness, identity, and the ethical implications of creating advanced AI beings.
Responsible AI
An overarching approach to developing, deploying, and using AI systems in a way that is ethical, fair, transparent, safe, and accountable. Responsible AI emphasizes the importance of considering the societal impacts of AI and mitigating potential risks and harms throughout the AI lifecycle.
Robotic Process Automation (RPA)
A technology that uses software "robots" or bots to automate repetitive, rule-based tasks traditionally performed by humans, especially in office environments. RPA is often used to automate back-office processes like data entry, form processing, and system interactions, improving efficiency and accuracy.
Rules-Based Automation
Automation systems that operate based on a predefined set of explicit rules and logic created by humans. Rules-based automation is effective for tasks that are well-defined, predictable, and follow clear, unchanging procedures, but lacks the adaptability of AI-driven automation.

Section S, containing 31 terms

Safety of AI
The field dedicated to ensuring AI systems are developed and used responsibly, reliably, and ethically. Safety of AI focuses on preventing unintended harm, aligning AI with human values, and maintaining control as AI becomes more advanced and capable.
Sample Size
The number of individual data points or examples included in a dataset that is used for training or evaluating AI models. A sufficient sample size is essential for ensuring that AI models are robust, accurate, and generalize well to new, unseen data.
Scaling Automation
The capability of automation systems to handle increasing workloads, larger amounts of data, or wider deployments across an organization without significant performance degradation. Scaling automation is crucial for businesses to effectively expand and adapt their automated processes as they grow.
Scaling Laws
Empirical relationships observed in Large Language Models (LLMs) that demonstrate predictable improvements in model performance as model size, training data quantity, and compute resources are increased. Scaling laws provide insights into the potential and future development of increasingly powerful AI models.
Scripting Automation
A foundational approach to automation that involves writing scripts – short programs in languages like Python – to automate repetitive IT and data processing tasks. Scripting automation is often used for tasks like system administration, file management, and automating software deployments.
Self-Attention
A key mechanism in transformer neural networks that allows the model to focus on the most relevant parts of the input data when processing it, similar to how human attention works. Self-attention is crucial for enabling AI models to understand context and relationships effectively, particularly in natural language and image processing.
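A toy NumPy sketch of scaled dot-product self-attention on three 4-dimensional token vectors; for simplicity the queries, keys, and values all equal the input, whereas real models learn separate projection matrices:

    import numpy as np

    x = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0, 0.0]])
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(x.shape[1])         # similarity of every token pair
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
    output = weights @ v                           # each token becomes a weighted mix
    print(output.round(2))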
Self-Healing Automation
Advanced automation systems engineered to automatically detect, diagnose, and recover from errors or unexpected disruptions with minimal or no human intervention. Self-healing automation significantly improves the resilience and reliability of automated processes, reducing downtime and maintenance needs.
Self-Improving AI
Hypothetical AI systems with the theoretical ability to autonomously improve their own cognitive capabilities, performance, or design over time through recursive self-enhancement. Self-improving AI is a concept with profound implications for the future of technology and raises critical questions about control and safety.
Self-Supervised Learning
Machine learning techniques that enable AI models to learn from vast amounts of unlabeled data by generating their own supervision signals during training. Self-supervised learning is particularly valuable for leveraging the abundance of unlabeled data to create powerful AI models, reducing reliance on costly labeled datasets.
Semiconductor
A category of materials with electrical conductivity properties intermediate between conductors and insulators, making them fundamental to modern electronics and computing. Semiconductors, with silicon being a prime example, are essential for manufacturing the microchips and transistors that power AI hardware and computing devices.
Service as Software
An emerging business model that utilizes AI and automation to transform traditional services into scalable, software-based products that can be delivered more efficiently. Service as software offers the potential to scale service delivery, improve consistency, and reduce costs compared to traditional human-centric service models.
Shot Prompting
In Large Language Models (LLMs), shot prompting refers to techniques for designing prompts that include a limited number of examples or demonstrations to guide the LLM towards generating the desired type of output. Common shot prompting techniques include zero-shot, one-shot, and few-shot prompting.
Singularity
Often referred to as the Technological Singularity, it is a speculative future point in time when technological advancement becomes uncontrollable and exponential, largely influenced by the advent of superintelligence. The singularity represents a hypothetical horizon beyond which predicting future technological and societal developments becomes extremely challenging.
Skynet
A fictional AI system from the "Terminator" film series that gains sentience and becomes an antagonist to humanity, often cited as a cautionary example in discussions about AI risks. Skynet represents a popular cultural reference for the potential dangers of uncontrolled AI and the critical importance of AI safety considerations.
Slop
An informal and critical term for low-quality, inaccurate, or undesirable outputs generated by AI systems, particularly Large Language Models (LLMs). "Slop" is often attributed to factors such as insufficient training data, biases in datasets, or limitations in prompt engineering techniques.
Smart Automation
Automation systems enhanced by AI technologies like machine learning, computer vision, and natural language processing to enable more intelligent and adaptive task execution. Smart automation allows systems to handle complex, dynamic, and less structured tasks that go beyond the capabilities of traditional rule-based automation.
Smoke and Mirrors AI
A critical term used to describe AI products or demonstrations that create a deceptive *appearance* of advanced AI capabilities, often through clever interfaces or staged presentations, without possessing genuine AI functionality or underlying intelligence. "Smoke and mirrors AI" suggests a superficial or exaggerated representation of AI prowess.
Snake Oil
A highly critical and derogatory term for AI products or services that are fraudulently or misleadingly marketed with exaggerated or false claims of capability and effectiveness. "Snake oil AI" is used to describe deceptive marketing that exploits the hype and public interest in AI for commercial gain, often without delivering real value.
Software as a Service (SaaS)
A widely adopted software distribution model where software applications are hosted by a service provider and made accessible to users over the internet, typically on a subscription basis. SaaS has become a prevalent delivery method for AI-powered tools and platforms, offering scalability, accessibility, and ease of deployment for users.
Standard Deviation
A fundamental statistical measure that indicates the degree of dispersion or variability within a dataset, quantifying how much individual data points deviate from the dataset's mean. Standard deviation is a crucial measure for understanding the spread and distribution of data in statistical analysis and machine learning.
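The calculation itself is straightforward; a worked example in plain Python:

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)                                # 5.0
variance = sum((x - mean) ** 2 for x in data) / len(data)   # 4.0 (population variance)
std_dev = math.sqrt(variance)                               # 2.0
```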
Standard Operating Procedures (SOP)
Standard Operating Procedures (SOPs) are documented, step-by-step instructions that outline how to perform routine tasks or processes consistently. In AI and automation, SOPs are crucial for ensuring consistent, reliable, and compliant operation of AI systems, covering areas like data handling, model deployment, and ethical guidelines.
Statistical Significance
In statistical hypothesis testing, statistical significance is a measure of the likelihood that an observed result is not simply due to random chance, but reflects a genuine relationship or effect within the data. Researchers use statistical significance to determine the reliability and validity of their findings when analyzing data or evaluating machine learning models.
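As a sketch, one common approach is a two-sample t-test, shown here with SciPy (assumed installed); a small p-value (conventionally below 0.05) suggests the observed difference is unlikely to be due to chance alone:

```python
from scipy import stats

group_a = [5.1, 4.9, 5.3, 5.0, 5.2]   # measurements from one condition
group_b = [5.8, 6.1, 5.9, 6.0, 6.2]   # measurements from another condition
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```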
Stochastic Parrot
A critical and evocative term used to characterize Large Language Models (LLMs) as systems that, while capable of generating fluent and human-like text, do so by statistically piecing together patterns from their training data, without demonstrating genuine understanding, intentionality, or consciousness. The "stochastic parrot" concept underscores ongoing debates about the nature of intelligence in current AI.
Stream Processing
A data processing paradigm focused on ingesting, analyzing, and acting upon data in real-time, continuously as it is generated, enabling immediate responses and insights. Stream processing is essential for applications that demand real-time data analysis and decision-making, such as financial trading systems, cybersecurity threat detection, and industrial monitoring.
Strong AI
Often used synonymously with AGI, Strong AI represents a hypothetical future level of AI characterized by general-purpose intelligence and a wide range of cognitive capabilities at least matching, or exceeding, human abilities across diverse domains. Strong AI is often associated with the potential for machine consciousness, sentience, and true understanding.
Superintelligence
A hypothetical future form of AI exhibiting cognitive abilities and intellectual capacity far surpassing that of the most intelligent humans across all domains of thought and creativity. The concept of superintelligence is central to discussions about long-term AI risks, ethical considerations, and the potential for transformative societal impacts.
Supervised Learning
A foundational category of machine learning algorithms where models are trained on datasets that are explicitly labeled with the correct outputs or target values for given inputs. In supervised learning, the AI learns from these labeled examples to generalize and make accurate predictions or classifications on new, unlabeled data.
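A minimal sketch with scikit-learn (assumed installed): the model is fit on labeled examples, then predicts labels for inputs it has not seen:

```python
from sklearn.linear_model import LogisticRegression

X_train = [[1], [2], [3], [10], [11], [12]]   # inputs
y_train = [0, 0, 0, 1, 1, 1]                  # human-supplied labels
model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[2.5], [11.5]]))         # expected: [0 1]
```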
Synthetic Biology
An interdisciplinary field at the convergence of biology, engineering, and computer science that focuses on designing and constructing novel biological entities, systems, and processes with enhanced or artificial functions. Synthetic biology offers potential synergies with AI in areas such as drug discovery, biomaterials, and the development of bio-inspired computing paradigms.
Synthetic Data
Data generated artificially through algorithms or simulations, rather than derived from direct observation of the real world. Synthetic data is increasingly utilized in AI to augment or replace real-world datasets, particularly to address challenges related to data scarcity, privacy restrictions, or the need for diverse and controlled training examples.
System Instructions
Also known as "system prompts" or "meta prompts", system instructions are carefully formulated directives provided to Large Language Models (LLMs) to govern their behavior, output style, and task execution parameters. Well-crafted system instructions are key to effectively controlling, customizing, and aligning LLMs for specific applications and use cases.
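A sketch using the widely adopted chat-message convention (the exact role names depend on the specific API, so treat them as assumptions):

```python
messages = [
    {"role": "system",  # sets persistent behavior for the whole conversation
     "content": "You are a concise technical assistant. Answer in plain English."},
    {"role": "user",    # the end user's actual request
     "content": "Explain what a vector database is."},
]
```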

Section T, containing 15 terms

Task Automation
Automating specific, well-defined tasks that are often repetitive and rule-based, such as data entry or report generation, typically within a larger process or workflow. Task automation focuses on improving efficiency and accuracy at a granular level, by automating individual steps rather than entire processes.
Technological Singularity
See Singularity. The technological singularity is a hypothetical point in the future when artificial intelligence becomes capable of recursive self-improvement, leading to runaway technological growth and unpredictable changes to human civilization.
Temperature Settings
A parameter in Large Language Models (LLMs) that controls the randomness and creativity of the text generated by the model. Higher temperature settings make the output more diverse and surprising, while lower settings make it more focused and predictable.
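A minimal sketch of how temperature reshapes the output distribution before sampling (plain softmax sampling in NumPy; real LLM decoders add further techniques such as top-k or nucleus sampling):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, seed=None):
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=float) / temperature  # low T sharpens, high T flattens
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                                    # softmax over scaled logits
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5]
print(sample_with_temperature(logits, temperature=0.2, seed=0))  # near-deterministic
print(sample_with_temperature(logits, temperature=2.0, seed=0))  # far more varied
```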
Test Automation
The use of specialized software tools and scripts to automatically execute software tests, compare results to expected outcomes, and generate test reports, all without manual human intervention. Test automation is critical for ensuring software quality, improving testing speed and coverage, and supporting agile development practices.
Three Laws of Robotics
A set of three ethical rules for robots and artificial intelligence, formulated by science fiction author Isaac Asimov, designed to ensure robots would be safe and beneficial to humans. While fictional, the Three Laws of Robotics have been highly influential in discussions about AI ethics and AI safety, prompting ethical considerations for real-world AI development.
Throughput
In AI and computing, throughput refers to the amount of data or the number of tasks that a system can process within a given time period, often measured in operations per second or data processed per minute. High throughput is a key performance indicator for AI systems, especially for applications requiring rapid processing of large volumes of data or user requests.
Time Series
Data points indexed in chronological order, representing measurements or observations taken sequentially over time at regular intervals. Time series data is common in many domains, including finance, econometrics, and IoT sensor readings, and requires specialized analytical techniques, including AI methods, for forecasting, trend analysis, and anomaly detection.
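A small sketch with pandas (assumed installed): daily readings indexed by date, smoothed with a rolling mean, one of the simplest trend-analysis techniques:

```python
import pandas as pd

readings = pd.Series(
    [20.1, 20.4, 21.0, 22.3, 21.8, 22.9, 23.5],
    index=pd.date_range("2024-01-01", periods=7, freq="D"),
)
print(readings.rolling(window=3).mean())  # 3-day moving average
```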
Token
In Natural Language Processing (NLP) and Large Language Models (LLMs), a token is the smallest unit of text that a model processes when understanding or generating language. Tokens can be individual words, parts of words, or even single characters, depending on the tokenization method used.
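An illustrative, deliberately naive comparison of tokenization granularities; real LLM tokenizers use learned subword vocabularies such as byte-pair encoding, so actual token boundaries will differ:

```python
text = "unbelievable"
toy_subwords = ["un", "believ", "able"]         # one word may become several tokens
toy_characters = list(text)                     # character-level tokenization
toy_words = "tokenization splits text".split()  # word-level tokenization
```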
TPU (Tensor Processing Unit)
Tensor Processing Units (TPUs) are custom-designed hardware accelerators developed by Google to significantly accelerate machine learning workloads, particularly for deep learning. Optimized for tensor algebra, TPUs offer substantial performance and energy efficiency improvements compared to CPUs and GPUs for many AI computations, especially during model training and inference.
Training Cost
The overall resources required to train an AI model, encompassing computational infrastructure costs (like cloud computing or specialized hardware), energy consumption, engineering time, and data acquisition expenses. Training cost for large, state-of-the-art AI models can be extremely high, representing a significant barrier to entry and a key consideration in AI development.
Training Data
The dataset used to train a machine learning model, consisting of input examples and their corresponding desired outputs or labels. The quality, quantity, and representativeness of training data are paramount for the success of any supervised machine learning model, as the model learns directly from the patterns and relationships within this data.
Transfer Learning
A machine learning technique in which knowledge gained from training a model on one task is reused to improve performance on a different but related task. Transfer learning reduces the data and compute needed for the new task, because the model starts from useful learned representations rather than from scratch.
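A common pattern, sketched here with PyTorch and torchvision (both assumed installed): reuse a pretrained image backbone, freeze its weights, and train only a new classification head:

```python
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                 # freeze the transferred knowledge
model.fc = nn.Linear(model.fc.in_features, 10)  # new head for a 10-class task
```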
Transformer
A neural network architecture built around self-attention mechanisms, which let the model weigh every part of its input against every other part in parallel. Transformers are the dominant architecture behind modern Large Language Models and are increasingly used in computer vision and other domains.
Trigger-Based Automation
Automation that is initiated by specific events or conditions, such as receiving an email, a file arriving in a folder, or a sensor value crossing a threshold. Trigger-based automation lets systems respond to events immediately and consistently, without scheduled runs or manual starts.
Turing Test
A test of machine intelligence proposed by Alan Turing in 1950, in which a human evaluator converses with both a human and a machine without knowing which is which. If the evaluator cannot reliably tell the machine's responses from the human's, the machine is said to have passed; the Turing Test remains a touchstone in debates about what counts as machine intelligence.

Section U, containing 4 terms

Uncanny Valley
A psychological concept describing the unsettling or repulsive feeling that humans can experience when they encounter artificial representations of humans, like robots or CGI characters, that appear almost, but not perfectly, realistic. The closer these representations come to human likeness without achieving *perfect* realism, the stronger the feeling of unease tends to be.
Underfitting
A common issue in machine learning where a model is too simplistic or insufficiently trained to accurately capture the underlying patterns and complexities within the training data. An underfit model typically exhibits poor performance, not only on unseen data but also on the training data itself, indicating it is not learning effectively.
Unsupervised Learning
A branch of machine learning where algorithms learn from unlabeled data, without explicit guidance from pre-defined correct answers or labels. In unsupervised learning, the AI must independently discover patterns, structures, and groupings within the data, making it useful for tasks like clustering similar data points or reducing data dimensionality.
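A minimal clustering sketch with scikit-learn (assumed installed): the algorithm discovers two groups in unlabeled points without ever being told the "right answers":

```python
from sklearn.cluster import KMeans

points = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],   # one natural cluster
          [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]]   # another natural cluster
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(labels)  # e.g. [0 0 0 1 1 1]: two discovered groups
```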
User Prompt
In interactive AI systems, especially Large Language Models (LLMs), a user prompt is the input, question, or instruction provided by a human user to the AI to initiate a response or guide its behavior. The design and clarity of the user prompt are critical factors in determining the quality, relevance, and usefulness of the AI-generated output.

Section V, containing 8 terms

Validation Data
A portion of data held back from the training data and used to evaluate the performance of a machine learning model during its training phase. Validation data helps to fine-tune the model's settings and prevent overfitting, ensuring it generalizes well to new, unseen data beyond the training set.
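A sketch of carving out validation data with scikit-learn (assumed installed); the held-out portion is never used to fit the model:

```python
from sklearn.model_selection import train_test_split

X = [[i] for i in range(100)]
y = [i % 2 for i in range(100)]
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)
# 80 examples to train on, 20 reserved to check generalization during training
```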
Value Stream Automation
Applying automation technologies to optimize the entire sequence of activities, or "value stream," that an organization undertakes to deliver a product or service to a customer. Value stream automation aims to streamline the complete process from customer request to value delivery, eliminating waste and maximizing efficiency across all stages.
Vaporware AI
A critical term describing AI products or services that are heavily promoted and marketed but are either never actually released or fail to function as advertised. "Vaporware AI" often capitalizes on the hype surrounding AI to generate interest and investment, without delivering real, functional technology.
Variance
In statistics, variance is a measure of how spread out or dispersed a set of data points are from their average value (the mean). A high variance indicates that the data points are widely scattered, while a low variance indicates they are clustered closely around the mean.
Vector Database
A specialized type of database designed for efficient storage and retrieval of vector embeddings, which are numerical representations of data used in AI. Vector databases excel at performing similarity searches across these high-dimensional vectors, making them essential for applications like recommendation engines, semantic search, and image recognition.
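A toy version of the core operation a vector database optimizes, written in NumPy: finding the stored embeddings most similar to a query vector (production systems add indexing structures to make this fast at scale):

```python
import numpy as np

def cosine_top_k(query, vectors, k=2):
    q = query / np.linalg.norm(query)
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = V @ q                        # cosine similarity to every stored vector
    return np.argsort(scores)[::-1][:k]   # indices of the k most similar vectors

stored = np.random.default_rng(0).normal(size=(5, 4))  # 5 stored embeddings
print(cosine_top_k(stored[0], stored))    # index 0 ranks first (identical vector)
```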
Virtual Assistant
Software applications, often powered by AI, designed to assist users with tasks, provide information, and automate simple processes through voice or text-based interfaces. Popular virtual assistants include Siri, Alexa, and Google Assistant, which can handle tasks like setting reminders, playing music, and answering questions.
Vision Transformer
A type of transformer neural network architecture that has been adapted for computer vision tasks, such as image recognition and image analysis. Vision transformers apply the self-attention mechanism, originally designed for language, to effectively process and understand visual information in images.
VRAM (Video RAM)
Video Random Access Memory (VRAM) is a dedicated type of high-speed memory used in Graphics Processing Units (GPUs) that is specifically designed for graphics and display processing. In AI, VRAM on GPUs is essential for efficiently handling the large datasets and complex computations involved in training and running AI models, especially for tasks like image and video processing.

Section W, containing 7 terms

WALL-E
WALL-E is the titular protagonist of the Pixar animated film of the same name: a robot responsible for cleaning up a deserted, garbage-filled Earth. WALL-E is a popular and endearing example of AI in popular culture, exploring themes of environmentalism, machine consciousness, and the potential for machines to exhibit human-like emotions and values.
Washing
In the context of AI and related fields, "washing" refers to misleading marketing or branding that exaggerates or misrepresents the extent to which a product, service, or company actually utilizes AI technologies. Terms like "AI washing" or "greenwashing" describe practices where claims of AI-powered features or ethical/sustainable practices are superficial or unsubstantiated.
Weights
In neural networks, weights are numerical parameters that represent the strength of connections between artificial neurons (nodes). Weights are adjusted during the training process to enable the network to learn patterns and relationships in data, with higher weights indicating stronger influence of one neuron on another.
White Box
In AI and machine learning, a "white box" model refers to a model whose internal workings and decision-making processes are transparent and easily understandable to humans. White box models, such as decision trees or rule-based systems, allow users to see *how* the model arrives at its outputs, contrasting with opaque black box models.
Winter
In the history of AI, an "AI Winter" denotes a period of reduced funding, diminished research progress, and decreased public and industry interest in artificial intelligence. AI Winters typically follow periods of inflated expectations and unmet promises in the field, leading to a temporary slowdown in AI development.
WOPR
WOPR (War Operation Plan Response) is the name of the AI computer system in the 1983 film "WarGames." WOPR famously learns through interaction and eventually concludes that nuclear war is a game with no winners, becoming a cultural reference point for discussions about AI learning, unintended consequences, and AI ethics.
Workflow Orchestration
Workflow orchestration involves automating and managing the execution of complex, multi-step workflows, where tasks are coordinated across different systems and applications. In automation and AI, workflow orchestration ensures that automated processes are executed in the correct sequence, dependencies are managed, and overall business processes are streamlined.

Section X, containing 1 term

XAI (Explainable AI)
XAI stands for Explainable AI, which refers to artificial intelligence systems designed to provide clear, transparent, and understandable explanations for their decisions and actions. The goal of XAI is to make AI more interpretable to humans, fostering trust and accountability, particularly in critical applications.

Section Y, containing 1 term

YAML (YAML Ain't Markup Language)
YAML is a human-readable data serialization format that is often used to write configuration files or exchange data between systems in a way that is easy for humans to read and write. In AI and machine learning, YAML is frequently used to define data pipelines, model configurations, and deployment settings.
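A small sketch of a typical model-configuration file parsed with PyYAML (assumed installed); the keys shown are illustrative, not a standard schema:

```python
import yaml

config_text = """
model:
  name: sentiment-classifier
  learning_rate: 0.001
  epochs: 10
"""
config = yaml.safe_load(config_text)
print(config["model"]["learning_rate"])  # 0.001
```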

Section Z, containing 1 term

Zero-Shot Learning
Zero-shot learning is a capability in advanced AI models, especially Large Language Models (LLMs), that enables them to perform tasks or recognize categories they have never been explicitly trained on. This means the AI can generalize and apply its knowledge to entirely new situations or instructions, without prior examples specific to those situations.