Last updated: 2026-02-23

GLOSSARY

72+ terms explained for AI-assisted development

> AI Fundamentals

Context Window
The maximum amount of text (measured in tokens) that an AI model can process in a single interaction, including both the input prompt and the generated output.
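Because input and output share the same window, a long prompt directly shrinks the room left for the reply. A minimal budgeting sketch (the 200k-token window and token counts below are hypothetical round numbers, not any specific model's limits):

```python
def fits_context(prompt_tokens: int, max_output_tokens: int, context_window: int) -> bool:
    """Input and output share one window, so both must fit together."""
    return prompt_tokens + max_output_tokens <= context_window

# A hypothetical 200k-token window: a 150k-token prompt leaves at most
# 50k tokens for the model's reply.
print(fits_context(150_000, 50_000, 200_000))  # True
print(fits_context(150_000, 60_000, 200_000))  # False
```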
Tokens
The basic units of text that AI models process, typically representing words, parts of words, or individual characters.
Prompt Engineering
The practice of crafting effective instructions and context for AI models to produce better, more accurate outputs.
RAG (Retrieval-Augmented Generation)
A technique that enhances AI responses by first retrieving relevant information from a knowledge base, then using it as context for generation.
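The retrieve-then-generate flow can be sketched in a few lines. This toy retriever ranks documents by keyword overlap purely for illustration; real RAG systems use embedding similarity, and the function names here are invented:

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy scoring)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved passages as context for the model."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The context window limits how many tokens a model can process.",
    "Git is a distributed version control system.",
]
prompt = build_prompt("What limits how many tokens a model can process?", docs)
print(prompt.splitlines()[1])  # the passage about the context window is retrieved
```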
Embeddings
Numerical vector representations of text that capture semantic meaning, enabling similarity search and retrieval.
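Similarity between embeddings is usually measured with cosine similarity. A pure-Python sketch with toy 3-dimensional vectors (real models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Angle-based similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made toy "embeddings": semantically close texts get nearby vectors.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
car = [0.1, 0.0, 0.9]
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```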
LLM (Large Language Model)
A neural network trained on vast amounts of text data that can understand and generate human language and code.
Fine-Tuning
The process of further training a pre-trained AI model on a specific dataset to improve its performance on particular tasks.
Inference
The process of using a trained AI model to generate predictions or outputs from new input data.
Temperature
A parameter that controls the randomness of AI model outputs, where lower values produce more deterministic results and higher values produce more creative outputs.
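Under the hood, temperature divides the model's logits before the softmax. This standalone sketch shows the effect on a toy distribution: low temperature concentrates probability on the top token, high temperature flattens it.

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Scale logits by 1/temperature, then softmax. Low T sharpens, high T flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 2.0)
print(max(cold) > max(hot))  # True: low temperature concentrates probability
```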
Top-P (Nucleus Sampling)
A parameter that controls output diversity by limiting token selection to the smallest set of tokens whose cumulative probability exceeds a threshold P.
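The nucleus-filtering step can be shown directly on a toy probability table. This sketch keeps the smallest high-probability set reaching P and renormalizes it; the token names and probabilities are invented:

```python
def nucleus_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest set of tokens whose cumulative probability reaches p,
    then renormalize so the kept probabilities sum to 1."""
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in items:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {token: prob / total for token, prob in kept}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
print(nucleus_filter(probs, 0.8))  # keeps "the" and "a", drops the long tail
```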
Chain-of-Thought
A prompting technique where the AI model is encouraged to show its reasoning step by step before arriving at a final answer.
Few-Shot Learning
Providing a small number of examples in the prompt to help the AI model understand the desired output format and pattern.
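A few-shot prompt is just examples interleaved ahead of the real query. One way to assemble it (the snake_case-to-camelCase task here is an invented example):

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Place input/output examples before the real query so the model
    can infer the expected format from the pattern."""
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    [("snake_case_name", "snakeCaseName"), ("user_id", "userId")],
    "created_at",
)
print(prompt)  # two worked examples, then the unanswered query
```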
Zero-Shot Learning
The ability of an AI model to perform a task correctly without any examples, relying solely on its training and the task description.
Transformer
The neural network architecture that underlies modern LLMs, using self-attention mechanisms to process sequences of tokens in parallel.
Attention Mechanism
A component of transformer models that allows the model to focus on different parts of the input when generating each part of the output.
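The core operation is scaled dot-product attention: each query scores every key, the scores become softmax weights, and the output is a weighted mix of the values. A minimal pure-Python sketch on 2-dimensional toy vectors:

```python
import math

def scaled_dot_product_attention(queries, keys, values):
    """For each query, weight the value vectors by softmax(q . k / sqrt(d))."""
    d = len(queries[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# One query attending over two key/value pairs: the query matches key 0,
# so the output leans toward value 0.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(scaled_dot_product_attention(q, k, v))
```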
Hallucination
When an AI model generates plausible-sounding but factually incorrect information, such as non-existent APIs, wrong function signatures, or fabricated library features.
Grounding
Techniques that anchor AI outputs to factual, verifiable information sources, reducing hallucination and improving accuracy.
System Prompt
Instructions provided to an AI model that set its behavior, personality, and constraints for the entire conversation.
User Prompt
The input message from the human user to the AI model, containing questions, instructions, or context for the desired response.
Assistant Message
The AI model's response to a user prompt, which may include text explanations, code, tool calls, or a combination.
Latency
The time delay between sending a request to an AI model and receiving the first token of its response, affecting the responsiveness of coding tools.
Throughput
The number of tokens or requests an AI system can process per unit of time, determining how much work can be done in parallel.

> AI Coding

AI Agents
Autonomous AI systems that can plan, execute multi-step tasks, use tools, and make decisions to achieve goals without constant human guidance.
MCP (Model Context Protocol)
An open protocol, developed by Anthropic, that standardizes how AI models connect to external data sources and tools, enabling richer context and capabilities.
Hooks
Event-driven callbacks that execute custom code when specific actions occur in a system, enabling extensibility and monitoring.
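The pattern is a registry mapping event names to callbacks. A minimal sketch (the `file_saved` event and the linting callback are invented examples, not any particular tool's hook API):

```python
class HookRegistry:
    """Map event names to lists of callbacks fired when the event occurs."""
    def __init__(self):
        self._hooks = {}

    def on(self, event: str, callback) -> None:
        """Register a callback for an event."""
        self._hooks.setdefault(event, []).append(callback)

    def fire(self, event: str, payload) -> None:
        """Invoke every callback registered for the event."""
        for callback in self._hooks.get(event, []):
            callback(payload)

log = []
hooks = HookRegistry()
hooks.on("file_saved", lambda path: log.append(f"linting {path}"))
hooks.fire("file_saved", "main.py")
print(log)  # ['linting main.py']
```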
Tool Use
The ability of AI models to call external functions and tools to perform actions beyond text generation, such as reading files, running code, or searching the web.
Function Calling
A structured way for AI models to invoke predefined functions with typed parameters, enabling reliable integration with external systems.
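The usual shape is a JSON-schema tool definition plus a dispatch step. This sketch fakes the model's output as a JSON string; the `get_weather` tool and its schema are hypothetical, and real providers' request formats differ in detail:

```python
import json

# A hypothetical tool definition in the JSON-schema style that
# function-calling APIs typically use.
tool_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real weather API call

TOOLS = {"get_weather": get_weather}

# Simulated model output: the model chose a tool and emitted typed arguments.
model_call = '{"name": "get_weather", "arguments": {"city": "Lisbon"}}'
call = json.loads(model_call)
result = TOOLS[call["name"]](**call["arguments"])
print(result)  # Sunny in Lisbon
```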
Code Completion
AI-powered suggestions that predict and complete code as you type, ranging from single-line completions to multi-line blocks.
Code Generation
Using AI to create entire code blocks, functions, classes, or applications from natural language descriptions or specifications.
AI Pair Programming
A development practice where a human developer works collaboratively with an AI coding assistant, sharing context and iterating on code together.
Copilot
An AI-powered coding assistant that works alongside developers in their IDE, providing suggestions, completions, and explanations.
Agentic Coding
A development paradigm where AI agents autonomously write, test, and iterate on code with minimal human intervention.
Multi-Agent Systems
Systems where multiple AI agents work simultaneously on related tasks, coordinating to achieve complex goals.
Orchestration
The automated coordination and management of multiple processes, services, or agents to achieve a desired outcome.
Session Management
The creation, monitoring, and control of individual AI coding sessions, each running in an isolated environment.
Real-Time Monitoring
The continuous observation of system activity with instant updates, enabling immediate awareness of status changes and events.
Anthropic API
The programming interface provided by Anthropic for accessing Claude models, enabling developers to build AI-powered applications.
OpenAI API
The programming interface provided by OpenAI for accessing GPT models, DALL-E, and other AI services.
Streaming
A data delivery method where responses are sent incrementally as they're generated, rather than waiting for the complete response.
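A streaming client consumes chunks as they arrive instead of blocking on the full response. This sketch fakes the server side with a generator; real APIs deliver token-sized chunks over SSE or similar transports:

```python
def fake_stream(text: str):
    """Simulate a streaming API: yield the response a few characters
    at a time instead of returning it all at once."""
    chunk_size = 4
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

received = []
for chunk in fake_stream("def add(a, b): return a + b"):
    received.append(chunk)  # a real client would render each chunk immediately

print("".join(received))  # the full response, reassembled from chunks
```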
Rate Limiting
Restricting the number of API requests a client can make within a time period to prevent abuse and ensure fair resource distribution.
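A common client- or server-side implementation is the token bucket: requests spend tokens, which refill at a fixed rate. A minimal sketch:

```python
import time

class TokenBucket:
    """Allow up to `capacity` requests in a burst, refilling at `rate` per second."""
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, rate=1.0)
print([bucket.allow() for _ in range(3)])  # [True, True, False]: burst exhausted
```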
Cost Optimization
Strategies for reducing the cost of AI API usage while maintaining output quality, including model selection, prompt optimization, and caching.

> Development

LSP (Language Server Protocol)
A protocol that standardizes communication between code editors and language servers, providing features like autocomplete, diagnostics, and go-to-definition.
AST (Abstract Syntax Tree)
A tree representation of the syntactic structure of source code, where each node represents a code construct like a function, variable, or expression.
Linting
Automated analysis of source code to flag programming errors, bugs, stylistic issues, and suspicious constructs.
Static Analysis
Analyzing source code without executing it to find bugs, security vulnerabilities, and code quality issues.
Refactoring
Restructuring existing code to improve its design, readability, or performance without changing its external behavior.
Technical Debt
The accumulated cost of shortcuts, workarounds, and suboptimal code decisions that make future development slower and riskier.
Code Review
The systematic examination of source code changes by peers or automated tools to find bugs, improve quality, and share knowledge.
IDE (Integrated Development Environment)
A software application that provides comprehensive facilities for software development, including a code editor, debugger, and build tools.
CLI (Command Line Interface)
A text-based interface for interacting with software through typed commands, offering powerful automation and scripting capabilities.
Terminal
A text-based interface for accessing the operating system's command line, running programs, and managing files.
Git
A distributed version control system that tracks changes to files, enabling collaboration and history management in software projects.
Version Control
A system that records changes to files over time, allowing you to recall specific versions, compare changes, and collaborate with others.
Branching
Creating independent lines of development in version control, allowing parallel work on features, fixes, or experiments without affecting the main codebase.
Merge Conflict
A situation in version control where two branches have made conflicting changes to the same part of a file, requiring manual resolution.
Pull Request
A request to merge code changes from one branch into another, typically including a description, review process, and CI checks.
tmux
A terminal multiplexer that lets you run and manage multiple terminal sessions within a single window.

> DevOps

> Architecture

> Testing

FAQ

What topics does this glossary cover?

The glossary covers AI coding concepts including context windows, tokens, agents, tool use, MCP (Model Context Protocol), prompt engineering patterns, and other terms commonly encountered when working with AI coding assistants.

Are definitions aimed at beginners or experts?

Definitions are written for working developers who encounter these terms in practice. They focus on practical meaning rather than academic precision, with enough detail to understand how each concept affects your AI coding workflow.

How are glossary terms organized?

Terms are grouped by category (e.g., AI Fundamentals, AI Coding, Development) so you can browse related terms together. Each term links to a dedicated page with expanded context, related terms, and practical examples.

Sources & Methodology

Definitions are curated for AI-assisted engineering contexts and aligned with how terms are used in practical coding workflows.

READY TO START? Live Orchestration


Orchestrate Your AI Coding Agents

Manage multiple Claude Code sessions, monitor progress in real-time, and ship faster with HiveOS.