Learn - Movers and shakers

The lay of the land: the big players in the AI context space. For each, we outline their position in the market, their aims, what they offer, and the move we see them making to corner their facet, so you can see who targets which slice and who might fit you best.

Cursor

Cursor is an AI-native code editor built on VS Code. They sit at the intersection of the IDE and the model: the place where developers write code and where context (rules, rulesets, files) is most actionable.

Aims in the market

  • Own the in-IDE AI experience for professional developers.
  • Make repository and project context the default input for AI assistance.
  • Support a growing ecosystem of rules, skills, and MCP so teams can customise behaviour.

Offerings

  • Composer: Multi-file editing and long-running tasks with full repo context.
  • Rules and .mdc files: Project-level and task-level guidance (skills) stored in the repo.
  • MCP support: Model Context Protocol for tools, resources, and prompts.
  • Chat and inline edits: Conversational and inline AI assistance in the editor.

The move: corner the in-IDE context layer by making rules and skills repo-native (.cursor/rules and .mdc files) and supporting MCP for tools, so the same context can drive both Cursor and other MCP clients.
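
To make the rules idea concrete, here is a sketch of what a project rule file can look like. The frontmatter fields (description, globs, alwaysApply) follow Cursor's .mdc rule format; the rule text and the glob pattern are invented for illustration, not taken from any real project.

```markdown
---
description: Conventions for API handler code
globs: src/api/**/*.ts
alwaysApply: false
---

- Validate request bodies before use.
- Return errors as JSON with an `error` field, never plain text.
- Keep handlers thin; put business logic in the services layer.
```

Because the file lives in the repo, the same guidance travels with the codebase and applies to anyone (or any agent) working in it.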

Anthropic

Anthropic is an AI lab and the creator of Claude. They lead on safety, long context, and agentic behaviour. In the market they position as the responsible, enterprise-ready model provider with strong tool-use and reasoning.

Aims in the market

  • Establish Claude as the default choice for high-stakes and enterprise AI use.
  • Drive agentic workflows and tool use (MCP, Agent SDK) as the next layer on top of models.
  • Own the “safe, steerable, useful” narrative for frontier models.

Offerings

  • Claude: Family of models (Sonnet, Opus, Haiku) with long context and tool use.
  • Model Context Protocol (MCP): Open protocol for tools, resources, and prompts; originated by Anthropic, adopted by others.
  • Agent SDK and plugins: Libraries and extension points for building agents and tool integrations.
  • Claude for IDEs and APIs: Integrations (e.g. Claude Code, API) for developers and products.

The move: corner the agent and tool layer through MCP and the Agent SDK, pairing an open protocol with first-party tooling so that Claude and other clients can share the same tools and context.
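
Tool use is central to this move, so here is a minimal sketch of the shape a tool definition takes in Claude's Messages API: a name, a description the model reads, and a JSON Schema for the inputs. The `get_weather` tool itself is a hypothetical example, not a real integration.

```python
import json

# A tool definition in the shape Claude's Messages API accepts.
# The get_weather tool is a hypothetical example for illustration.
get_weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. London"},
        },
        "required": ["city"],
    },
}

# The definition serialises cleanly for the API's `tools` parameter.
payload_fragment = json.dumps({"tools": [get_weather_tool]}, indent=2)
print(payload_fragment)
```

The same description-plus-schema pattern is what MCP servers expose, which is why a tool written once can serve Claude and other MCP clients alike.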

OpenAI

OpenAI is the company behind ChatGPT and the GPT family. They dominate mindshare and broad adoption: consumers, prosumers, and developers building on the API. Their position is “default AI” for many use cases.

Aims in the market

  • Maintain ChatGPT as the primary consumer and prosumer AI experience.
  • Grow API and platform usage (plugins, tools, Assistants) for developers and enterprises.
  • Extend into code, search, and multi-modal experiences while staying model-centric.

Offerings

  • ChatGPT: Conversational AI product with plugins, code interpreter, and browsing.
  • GPT-4 and family: Models available via API and inside ChatGPT.
  • Assistants API: Persistent agents with tools, files, and retrieval.
  • Plugins and tools: Ecosystem for third-party tools and integrations.

The move: corner broad adoption and the platform layer by making ChatGPT and the Assistants API a single surface (chat, tools, files), so developers and users stay in the OpenAI ecosystem for both usage and distribution.

GitHub

GitHub is the home of code and collaboration. With Copilot they sit where code lives: the repo, the PR, and the editor. Their position is “AI that knows your codebase” and fits into existing Git and GitHub workflows.

Aims in the market

  • Make Copilot the default AI pair programmer for GitHub users.
  • Tie AI value to the repository: code completion, PR summaries, and repo-aware chat.
  • Grow revenue through Copilot subscriptions and enterprise adoption.

Offerings

  • GitHub Copilot: Code completion and suggestions in the IDE.
  • Copilot Chat: Conversational AI in the editor with repo context.
  • Copilot for PRs and docs: Summaries, reviews, and documentation tied to GitHub.
  • GitHub-native integrations: Actions, Issues, and API for embedding AI in workflows.

The move: corner the "codebase as context" layer by tying repo-aware completion and chat to Copilot and GitHub itself, so the natural place for AI-assisted code remains inside GitHub and the editors they support.

Model Context Protocol (MCP)

MCP is an open protocol for connecting AI models to tools, resources, and prompts. It is not a company but a standard — originated by Anthropic and adopted by Cursor, OpenAI, and others — that defines how context and capabilities are exposed to models.

Aims in the market

  • Enable interoperability: one MCP server can serve multiple clients (Claude, Cursor, etc.).
  • Standardise the “context layer”: tools, resources, and prompts as first-class concepts.
  • Let the ecosystem build shared infrastructure (databases, APIs, workflows) without lock-in.

Offerings

  • MCP specification: Open protocol for servers and clients.
  • Tools: Actions the model can invoke (APIs, code, search).
  • Resources: Data sources the model can read (files, docs, DBs).
  • Prompts: Reusable task templates that clients can run with the model.

The move: corner the connector layer through standardisation, so vendors and teams can share tools and context across Cursor, Claude, and other clients instead of building one-off integrations per product.
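
The tools, resources, and prompts above are all addressed over JSON-RPC 2.0. The following sketch shows the request shapes a client sends to list a server's tools and invoke one; the method names follow the MCP specification, while the `search_docs` tool and its arguments are hypothetical.

```python
import json

# MCP runs over JSON-RPC 2.0. A client first asks a server what it offers:
list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...then invokes a tool by name with arguments. The "search_docs" tool
# and its query are invented for illustration.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "rate limits"},
    },
}

# Resources and prompts use the same pattern (e.g. "resources/read",
# "prompts/get"), which is why any MCP client can talk to any MCP server.
for msg in (list_tools, call_tool):
    print(json.dumps(msg))
```

Because the wire format is this simple, a single server can sit behind Claude, Cursor, and any other client that speaks the protocol.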

Google

Google is a major model provider with Gemini and a large ecosystem (Search, Workspace, Cloud). They position for both consumers and enterprises, with long context, grounding in real-time data, and code-specific products (Gemini Code Assist, Duet) that tie AI to the repo and the pipeline.

Aims in the market

  • Establish Gemini as the default AI for Google users (Search, Workspace, Android) and for enterprises via Vertex.
  • Own the “grounded, up-to-date, multi-modal” space with retrieval and real-time data.
  • Win developer and code workflows through Code Assist, Duet, and Vertex AI integrations.

Offerings

  • Gemini: Family of models with long context, grounding, and multi-modal input/output.
  • Gemini Code Assist / Duet: AI for developers in the IDE and in Cloud, with repo and pipeline context.
  • Vertex AI: Enterprise platform for models, agents, and RAG with Google infrastructure.
  • Grounding and retrieval: Real-time search and data grounding so model outputs stay current and cited.

The move: corner the grounded, enterprise, and code layer through long context and retrieval, delivered as Gemini, Vertex AI, and Code Assist, so Google's stack becomes the default for both models and context in many organisations.

JetBrains

JetBrains is the company behind IntelliJ, PyCharm, WebStorm, and other IDEs. They have a huge installed base of professional developers. With AI Assistant they are layering model-powered completion, chat, and refactoring into the tools developers already use, tying context to the IDE and the project.

Aims in the market

  • Keep the JetBrains IDE the primary tool for professional developers while adding AI seamlessly.
  • Deliver repo and project context to the model within the IDE (no separate “AI app”) so workflows stay intact.
  • Grow AI Assistant adoption and paid usage across the product line.

Offerings

  • AI Assistant: Inline completion, chat, and refactoring powered by models inside JetBrains IDEs.
  • Full line of IDEs: IntelliJ, PyCharm, WebStorm, etc., each with embedded AI and project context.
  • Context from the project: Codebase, structure, and dependencies available to the model in-context.
  • Subscription and licensing: AI features as part of the JetBrains subscription and licence model.

The move: corner the "incumbent IDE plus AI" layer by building AI Assistant and project context directly into the editor, so developers who already live in JetBrains get AI without switching to a new tool.

Windsurf

Windsurf is Codeium’s AI-native code editor, built on VS Code. Like Cursor, it sits at the intersection of the IDE and the model, with a strong focus on agentic workflows, multi-file editing, and repository context. They are a direct competitor in the “AI-first IDE” space.

Aims in the market

  • Own a share of the AI-native IDE market alongside Cursor, with a differentiated UX and model options.
  • Make agentic, multi-step coding (plan, edit, test) the default experience in the editor.
  • Support repo context, extensions, and eventually ecosystem (e.g. rules, MCP) so teams can customise.

Offerings

  • Windsurf IDE: VS Code–based editor with deep AI integration and agentic flows.
  • Agentic workflows: Multi-step tasks, planning, and execution with full repo context.
  • Codeium completion: Codeium's completion and chat models available inside Windsurf.
  • Extensions and customisation: Extension model and settings for team and project-specific behaviour.

The move: corner the agentic-IDE layer through multi-step, plan-and-edit workflows, giving developers who want an AI-native editor a clear alternative with a different take on how the AI should behave.

Replit

Replit is an AI-native development environment in the browser: code, run, and deploy in one place. They target education, hobbyists, and rapid prototyping. With Agents and integrated models they are pushing “AI that can edit and run the whole project” as the default experience, with context being the full Replit workspace.

Aims in the market

  • Own the in-browser, zero-friction dev experience for learning and fast iteration.
  • Make “AI that can code, run, and deploy” the norm inside Replit so context is the entire workspace.
  • Grow usage and paid plans through Agents, deployment, and team features.

Offerings

  • Replit environment: Browser-based IDE with runtimes, deployment, and collaboration.
  • Replit Agents: AI that can plan, edit, run, and debug within the Replit project.
  • Integrated models and tools: Models and tools (browser, shell, etc.) with full workspace context.
  • Education and teams: Classrooms, teams, and deployment as part of the product.

The move: corner the browser-native, full-stack AI dev layer through Agents and the unified workspace, so the natural place for zero-setup, AI-driven coding is inside Replit's environment.

Amazon Web Services (AWS)

AWS is the dominant cloud provider. With Bedrock and CodeWhisperer they are bringing models, agents, and AI-assisted coding to enterprises that already run on AWS. Their position is “AI that fits your cloud, compliance, and code pipeline” with strong emphasis on enterprise control and integration.

Aims in the market

  • Make Bedrock the default way to use frontier and open-weight models in the cloud with enterprise controls.
  • Make CodeWhisperer the default AI pair programmer for teams committed to AWS and existing IDEs.
  • Tie AI value to AWS data, pipelines, and services (RAG, agents, SageMaker) so the ecosystem stays on AWS.

Offerings

  • Amazon Bedrock: Managed access to multiple models (Claude, Llama, etc.) with guardrails and enterprise features.
  • Amazon CodeWhisperer: Code completion and chat in the IDE, with repo and AWS context.
  • Agents and RAG on Bedrock: Agents, knowledge bases, and retrieval built on AWS data and services.
  • SageMaker and ML platform: Training, deployment, and MLOps for custom and open models.

The move: corner the enterprise and cloud AI layer through Bedrock's managed models and CodeWhisperer's IDE integration, so organisations that standardise on AWS get AI that stays inside their cloud and compliance boundaries.
