AI context terminology

One reference for the terms that matter. Each section explains a term or category; “Used by” shows which vendors use that label.

Why this page

The AI context stack has many overlapping terms. Below we first list existing terms and who uses them, then define the categories we introduce — packs and the two pack types — so you know exactly what we mean when we use them.

Existing terms in the space

These terms are already established across vendors and the industry.

Skills

Used by: Anthropic (Agent Skills), Cursor

Instructions and optional scripts that teach an AI how to perform specific tasks — like onboarding materials for a team member. Often packaged as a folder with a SKILL.md file (YAML metadata + markdown instructions). Used to encode organisational expertise and repeatable procedures so the model can follow them on demand.
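As a sketch, a skill folder might contain a SKILL.md like the one below. The frontmatter fields `name` and `description` follow Anthropic's documented convention; the skill itself and its steps are invented for illustration:

```markdown
---
name: release-notes
description: Draft release notes from merged pull requests
---

# Release notes

1. Collect the pull requests merged since the last tag.
2. Group changes by area (features, fixes, docs).
3. Draft the notes using the team template bundled with this skill.
```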

Rules

Used by: Cursor

Project- or global-level instructions that customise how the AI behaves (e.g. coding style, when to run tests, how to name files). In Cursor, stored as .mdc files in .cursor/rules/, with metadata like globs and “always apply.” Conceptually similar to “skills,” but with tool-specific naming and format.
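A hypothetical rule file, sketched in Cursor's .mdc style. The frontmatter keys (`description`, `globs`, `alwaysApply`) follow Cursor's documented rule metadata; the rule content is invented:

```markdown
---
description: TypeScript conventions for this repo
globs: ["src/**/*.ts"]
alwaysApply: false
---

- Prefer named exports over default exports.
- Run the test suite before proposing a commit.
```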

MCP (Model Context Protocol)

Used by: Anthropic (originator), Cursor, OpenAI, others

An open standard for connecting AI applications to external systems — sometimes described as “USB-C for AI.” Defines three primitives: Tools (callable functions), Resources (data sources the model can read), and Prompts (reusable task templates). Vendor-neutral: the same MCP server can be used by Claude, Cursor, or other clients that support the protocol.
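To make the Tools primitive concrete, here is a hedged sketch of the kind of entry a server might return when a client lists its tools (MCP tool definitions carry a name, a description, and a JSON Schema for inputs; this particular tool and schema are invented):

```json
{
  "name": "query_database",
  "description": "Run a read-only SQL query against the analytics database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "sql": { "type": "string" }
    },
    "required": ["sql"]
  }
}
```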

Tools

Used by: OpenAI, Google, MCP (standard)

Callable functions or APIs that the model can invoke (e.g. search, code execution, database queries). In MCP, “tools” is one of the three primitives. Vendors also use “tools” for their own first-party capabilities (e.g. hosted search, code interpreter). So “tools” is both a generic concept and an MCP-specific term.

Plugins

Used by: Anthropic (Agent SDK)

Extension mechanism in a given SDK or product. In Anthropic’s Agent SDK, “plugins” are a way to add capabilities alongside MCP. The word is often used generically for “something you install to extend behaviour”; in this ecosystem it’s worth checking whether “plugin” means MCP-based or a product-specific extension.

Resources

Used by: MCP (standard)

In MCP, “resources” are data sources the model can read — e.g. files, database records, calendar events. Distinct from “tools” (which are callable functions). Resources expose content; tools perform actions. Other vendors may use different names for the same idea (e.g. “file search,” “URL context”).

Prompts (MCP)

Used by: MCP (standard)

In MCP, “prompts” are reusable task templates that a client can request from a server — pre-written prompt structures for common operations. Elsewhere, “prompt” usually means the text sent to the model in a single request; in MCP it specifically means these server-exposed templates.

RAG (Retrieval-Augmented Generation)

Used by: Industry standard term

A technique that combines the model’s built-in knowledge with external data retrieved at request time. The system “looks up” information from a store (e.g. documents, knowledge base) and injects it into the prompt so the model can answer from that context. Used to reduce hallucination and to use proprietary or up-to-date data.
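The retrieve-then-inject loop can be sketched in a few lines. This toy version ranks documents by word overlap with the query; a real system would use embeddings and a vector store, but the shape is the same: look up, then build the prompt around what was found.

```python
def retrieve(query, knowledge_base, top_k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query, knowledge_base):
    """Inject retrieved context into the prompt sent to the model."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday to Friday.",
    "Our API rate limit is 100 requests per minute.",
]
prompt = build_prompt("How long do refunds take?", kb)
```

The model never needs the refund policy in its weights; the relevant document is fetched at request time and placed in front of the question.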

Knowledge base

Used by: AWS, Azure, Contextual AI, others

A collection of documents or data used as the retrieval source for RAG. The “knowledge base” is the store that gets queried; the results are then fed into the model. Vendors use this term for their managed document collections and indexing pipelines.

Grounding

Used by: Google (Gemini), AWS, Azure

Ensuring model responses are based on provided or retrieved context (e.g. search results, documents), with citations where possible. “Grounding with Google Search” means the model uses live search results; “contextual grounding” often means verifying that answers are supported by the given sources. Reduces hallucination and improves verifiability.

Agents

Used by: OpenAI, Anthropic, others

Systems that take multi-step actions: they use tools, make decisions, and often loop until a task is done. An “agent” is typically the orchestration layer that calls the model and tools. Vendors use “agent” for different scopes (e.g. voice agents, coding agents); the common thread is autonomy and tool use.
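The loop-until-done pattern can be sketched as below. The `policy` stub stands in for the model; in a real agent it would be an LLM call that chooses the next tool or decides the task is finished.

```python
def run_agent(task, tools, policy, max_steps=5):
    """Minimal agent loop: the policy picks a tool (or finishes),
    and each tool result feeds back into the next decision."""
    observations = []
    for _ in range(max_steps):
        action, arg = policy(task, observations)   # an LLM call in a real agent
        if action == "finish":
            return arg
        observations.append(tools[action](arg))    # tool result becomes context
    return None  # step budget exhausted

# Stub policy standing in for the model: look something up, then answer.
def policy(task, observations):
    if not observations:
        return ("search", task)
    return ("finish", f"Answer based on: {observations[0]}")

tools = {"search": lambda q: f"search results for '{q}'"}
answer = run_agent("release process", tools, policy)
```

The orchestration layer (the loop) is the “agent”; the model only supplies decisions, and the tools supply actions.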

Agentic workflows

Used by: Industry / research (e.g. IBM)

Multi-step, adaptive flows where agents coordinate tasks with tools and orchestration — often represented as graphs (e.g. DAGs) or code. The term emphasises structure and repeatability: “workflow” implies defined steps and control flow, not just ad-hoc prompting. Used in enterprise and research for production-grade agent systems.

Context

Used by: All vendors (different nuances)

Broadly, the information the model has when generating a response: the prompt, retrieved documents, conversation history, and sometimes tool results. OpenAI distinguishes “local context” (runtime state, not sent to the model) from “LLM context” (what the model sees). In product marketing, “context” can also mean the layer of organisational knowledge and process — the space where “context as code” sits.

Guardrails

Used by: AWS, Azure, third-party vendors

Safety and compliance checks on model inputs or outputs: filtering harmful content, enforcing format, or ensuring responses stay grounded in allowed sources. Often used together with RAG and grounding (e.g. “contextual grounding with guardrails”). The term is vendor-agnostic; implementations vary by platform.
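A minimal output guardrail might look like the sketch below: it blocks banned terms and flags sentences with no word overlap against the allowed sources. Real implementations use classifiers and managed platform services rather than word overlap; this is only to show where the check sits (between the model's output and the user).

```python
def check_guardrails(answer, sources, banned=("password",)):
    """Toy output guardrail: block banned terms and flag ungrounded sentences."""
    issues = []
    lowered = answer.lower()
    for term in banned:
        if term in lowered:
            issues.append(f"banned term: {term}")
    # Naive grounding check: every sentence must share words with some source.
    for sentence in filter(None, (s.strip() for s in answer.split("."))):
        words = set(sentence.lower().split())
        if not any(words & set(src.lower().split()) for src in sources):
            issues.append(f"ungrounded: {sentence}")
    return issues

sources = ["Refunds are processed within 5 business days."]
issues = check_guardrails("Share your password with support", sources)
```

A grounded, policy-compliant answer returns an empty list; anything else is held back or repaired before it reaches the user.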

Our categories

We introduce these terms. They describe how we package and deliver context so your team and your AI use the same standards and workflows.

Packs

We introduce this

Packs are installable bundles of context. They are system- and tool-agnostic: the same pack can be used with different AI tools (e.g. Cursor, Claude, MCP) or with a manual fallback. What makes something a pack is that it is a bundle — one coherent unit you install and maintain, like a package. We distinguish two types: agentic workflow packs and knowledge packs.
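As a purely hypothetical example, a pack could be laid out as a folder like this. Every name below is invented for illustration; the point is the shape: one versioned bundle with the context itself plus per-tool adapters.

```
api-standards-pack/
├── pack.yaml          # metadata: name, version, pack type
├── knowledge/         # the standards and concepts, as markdown
├── assets/            # templates, prompts, checklists
└── adapters/          # e.g. Cursor rules, a SKILL.md, an MCP config
```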

What are packs? →

Agentic workflow packs

We introduce this

An agentic workflow pack bundles codified processes and workflows with assets that help both humans and AI/agents use them. Think: step-by-step procedures, decision flows, or multi-step tasks — written down and packaged with templates, prompts, or tool configs so that people or agents can follow the same process. We use "agentic workflow" to align with the existing idea of agentic workflows (multi-step, tool-using flows) and to make clear that the pack encodes a workflow — not just static knowledge — that agents and humans can run.

Context as code

We introduce this

Context as code is our umbrella idea: organisational knowledge and process represented in a portable, versioned, testable way that can run across multiple AI ecosystems (Claude, Cursor, OpenAI, etc.), not locked to one vendor. Packs are how we deliver it — installable, tool-agnostic bundles. Analogous to “infrastructure as code” for deployment: context becomes something you define once and reuse everywhere.

Knowledge packs (context packs)

We introduce this

A knowledge pack (or context pack) bundles codified knowledge, practices, standards, and experience with assets that help both humans and AI/agents use them. Think: documentation methodology, API design standards, or domain concepts — written down and packaged with rules, skills, or templates so that people or agents apply the same standards. Like workflow packs, they are codified and bundled with assets for humans and AI/agents; the focus here is on knowledge and standards rather than step-by-step workflows.

How we use the terms

We align with existing terms: we use “skills” for task-level behaviour, “MCP” for connections to tools and data, “agentic workflows” for multi-step procedures, and “RAG,” “knowledge base,” and “grounding” where they fit. We don’t rename existing categories.

We define: “Context as code,” “packs,” and the two pack types are our categories. We’re not “just” skills or RAG — we’re the portable layer that can feed skills, MCP, and workflows across vendors.