How it works, how to set it up, and when it makes sense for your workflow
Deep Tech Research · 28 Feb 2026 · R1
ADOPT. Cursor subagents are production-ready and already in use. The agent you're talking to can spawn subagents via the Task tool (Explore, Shell, General-purpose). You don't need to "set up" the built-in ones — they run automatically when appropriate. Custom subagents are optional: add Markdown files under .cursor/agents/ with YAML frontmatter (name, description, model) to get specialized workers (verifier, debugger, test-runner). For Eric's workflows — /deep* research, project checkout, shipios — subagents already make sense: use them for parallel exploration and bounded tasks. The main setup work is (1) ensuring a non-Auto model so subagents trigger, and (2) optionally adding 2–3 custom agents for verification or shell-heavy work.
Cursor subagents are specialized AI workers the main agent can delegate to. Each runs in its own context window, does a defined job, and returns a result. Cursor ships three built-in subagents (Explore for codebase search, Bash for shell commands, Browser for MCP-driven browser automation) that the agent uses automatically when tasks fit. You can add custom subagents by dropping .cursor/agents/*.md files with YAML frontmatter and a prompt. Use them when you need context isolation, parallel workstreams, or a dedicated "verifier" or "debugger" that doesn't bloat the main chat.

| Dimension | Rating | Evidence |
|---|---|---|
| Maturity | Production-ready | Shipped in Cursor 2.4 (Jan 2025); docs and forum active; CLI support as of Jan 2026[1] |
| Documentation | Excellent | Official Subagents page: context isolation, foreground/background, built-ins, custom file format, when to use vs skills[2] |
| Community | Established | Forum thread 4.9k views, 34 likes; feature requests and bug reports (model routing, recursion) documented[1,3] |
| Adoption | Early majority | Default subagents run automatically; power users define verifier/debugger/council-style agents[2,4] |

| Use case | Fit | Why |
|---|---|---|
| PCRM / Donna daily ops | Strong | Explore subagent already used for codebase search; main agent stays focused on state, memory, commands[2] |
| /deep* research (market, tech, product) | Strong | Parallel subagents for multi-source research; context isolation keeps the main report narrative clean[2,5] |
| Project checkout / diff-based billing | Strong | Concurrent blocks (e.g. /assist + /deepmarketresearch) map to parallel subagents; doc notes "when multiple sessions run simultaneously"[5] |
| Shipios / multi-layer app build | Strong | shipios command already specifies "Use subagents to parallelize" (UI, Data, Assets)[5] |
| Sourcy eval / activation bot | N/A | Bot logic lives in OpenClaw + prompts; Cursor subagents are for Cursor-side research/code, not WA bot runtime |
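For the research rows above, parallel delegation can be requested directly in the prompt; the wording below is purely illustrative, not a documented incantation:

```
Use three parallel subagents for this research task: one per source
category (docs, forum, changelog). Each returns a five-bullet summary
with links; merge the results into one comparison table.
```

The agent decides how to map this onto Task calls; the point is naming the split explicitly so each subagent gets a bounded job.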
Hub-and-spoke: one parent agent (your Composer/Agent session) and N subagents. The parent decides when to delegate, passes a prompt and optional params (e.g. subagent_type, readonly), and gets back a single final message (or runs in background). Subagents do not see the parent's full conversation; they get only what the parent puts in the prompt.[2]
```
[ You ] <--> [ Parent Agent ] <-- Task --> [ Subagent Explore ]   (codebase search)
                    |         <-- Task --> [ Subagent Shell ]     (bash)
                    |         <-- Task --> [ Custom verifier ]    (.cursor/agents/verifier.md)
                    +-- same context, sees subagent results only when they return
```
Built-in subagents (Explore, Bash, Browser) are invoked automatically when the agent's task matches — no config. Custom subagents are discovered from .cursor/agents/ (or ~/.cursor/agents/); the agent reads description to decide when to call them, or you invoke explicitly with /verifier or "use the verifier subagent."[2]
Task tool. The parent calls a Task (or equivalent) with a description, optional subagent type, and options (e.g. background, readonly). The runtime starts a separate agent process with that prompt. Foreground = block until result; background = return immediately, subagent continues.[2]
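The exact wire format of a Task call is internal to Cursor and not published; the sketch below is only an illustration using the parameter names the docs mention (subagent_type, readonly, background), and createSession is a made-up function name:

```json
{
  "description": "Find every caller of createSession and summarize the auth flow",
  "subagent_type": "explore",
  "readonly": true,
  "background": false
}
```

Whatever the real schema is, the shape is the same: a self-contained prompt plus routing options, because the subagent sees nothing else.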
Context isolation. Each subagent has its own context window. No access to parent chat history. So: long explorations or noisy shell output stay in the subagent; the parent only gets the summarized or final answer. That's why Explore/Bash/Browser are subagents — they generate large intermediate output.[2]
Custom subagent file format. Markdown with YAML frontmatter: name, description (when to use), model (inherit | fast | specific), readonly, is_background. Project dir .cursor/agents/ overrides user dir; .cursor/ overrides .claude/ and .codex/ for same name.[2]
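Putting those fields together, a skeleton showing every documented frontmatter key (the test-runner name and prompt are illustrative, not a shipped agent):

```markdown
---
name: test-runner        # unique name; also enables explicit /test-runner invocation
description: Runs the test suite after code changes. Use when tests must be confirmed passing.
model: fast              # inherit | fast | a specific model
readonly: true           # optional: restrict to non-mutating tools
is_background: false     # optional: run without blocking the parent
---
You are a focused test runner. Run the relevant tests and report failures
with file and line references. Do not modify code.
```

The body below the second `---` is the subagent's entire system prompt; keep it short and task-specific.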
Subagents don't receive User Rules. Each custom subagent's behavior is defined only by its own prompt in the file — not by Cursor user rules. Put any rules of conduct in the subagent's markdown body.[4]

| Dimension | Cursor subagents | Claude Code Agent Teams | Skills only |
|---|---|---|---|
| Structure | Hub–spoke, result back | Peer-to-peer, messaging | Single shot, no child context |
| Context | Isolated per subagent | Persistent per teammate | Same as main |
| Best for | Parallel + context-heavy (explore, bash, verify) | Multi-step collaboration | Single-purpose (changelog, format) |
| Reliability | Stable in Cursor | Message delivery bugs (VS Code, tmux) | N/A |
Eric's prior conclusion holds: for most work, Cursor Task subagents are the daily workhorse; Agent Teams are for when teammates must talk to each other (and you accept the current bugs).[5]
Subagents run as separate agent processes. Built-ins use a faster model for Explore so many parallel searches stay cheap. Token use: each subagent has its own context, so N parallel subagents ≈ N× context cost. Background subagents write output under ~/.cursor/subagents/ for inspection, and are resumable by agent ID so long-running work can be continued.[2]
Known limits:
- Model routing bug: frontmatter model is ignored in some builds — subagents inherit the parent model. The built-in Explore/Bash/Browser model is not user-configurable today.[1,3]
- Custom model: set model in the frontmatter. Usage-based plans respect the configured model.[2]
- Tools: the only per-agent restriction is readonly; subagents inherit the parent's tools (including MCP). There is no per-subagent tool allowlist in YAML.[2,4]
- Overhead: subagents add startup time and duplicate context. Use them when the benefit (isolation, parallelism, or a cheaper/faster model for Explore) outweighs the overhead. For quick single-step tasks the main agent is often faster; running many subagents in parallel multiplies token use.[2]
If a subagent fails, it returns an error to the parent; the parent can retry, resume, or handle. YAML syntax errors (missing colons, wrong indentation) make custom agents invisible — validate frontmatter. Practitioner reports (pre-2.4) mentioned inconsistent activation with certain configs; official docs and 2.4+ are the source of truth.[2,6]
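Because a malformed frontmatter block silently hides a custom agent, a pre-flight check is cheap insurance. A minimal standalone sketch, not an official Cursor tool; it assumes name and description are the fields the agent relies on:

```python
import re
from pathlib import Path

# Fields assumed mandatory, per the file format described above.
REQUIRED = ("name", "description")

def check_frontmatter(text: str) -> list[str]:
    """Return a list of problems found in a subagent file's YAML frontmatter."""
    problems = []
    match = re.match(r"^---\n(.*?)\n---(\n|$)", text, re.DOTALL)
    if not match:
        return ["missing or unterminated '---' frontmatter block"]
    fields = {}
    for line in match.group(1).splitlines():
        if not line.strip():
            continue
        if ":" not in line:
            # The classic failure mode: a key with no colon.
            problems.append(f"no colon in frontmatter line: {line!r}")
            continue
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    for key in REQUIRED:
        if not fields.get(key):
            problems.append(f"missing required field: {key}")
    return problems

if __name__ == "__main__":
    # Check every custom agent in the project directory.
    for path in Path(".cursor/agents").glob("*.md"):
        for problem in check_frontmatter(path.read_text()):
            print(f"{path}: {problem}")
```

Run it from the project root before blaming the agent for ignoring your verifier; a full YAML parser would catch more, but this covers the errors the forum reports mention.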
Forum (Cursor 2.4). Positive reception; questions about overriding built-ins (not yet possible), assigning a model to custom subagents (yes, in frontmatter), and /council-style multi-agent use. Colin (Cursor): the built-in Explore uses a faster model and has "proved out really well" in testing.[1]
Feature requests. Recursive subagents, CLI support for Task/subagents, and a direct model ID in Task (not just "fast") are requested for advanced workflows.[3]
Sentiment. Cautiously optimistic. Feature is used and documented; limitations (flat hierarchy, model routing) are known and requested for improvement.
Hype. "Subagents lead to faster execution and better context usage" (announcement).
Reality. True for context-heavy and parallel workloads (Explore, Bash, verification). Overkill for simple single-step tasks; skills or main agent are enough. Model and recursion limits are real; roadmap items, not blockers for current setup.
Gap. Built-in subagents can't be overridden (e.g. force Explore to use Gemini). Custom subagents can set model but inheritance bugs have been reported.
Cursor 2.4+ (subagents shipped Jan 2025). Usage-based plan or legacy with Max Mode if you want custom subagent models. No extra accounts.
Do nothing. The agent uses Explore, Bash, and Browser when the task fits. If subagents don't trigger, check that you're not on "Auto"; fall back to Sonnet/Opus until the fix lands.
Create .cursor/agents/verifier.md in the project:
```markdown
---
name: verifier
description: Validates completed work. Use after tasks are marked done to confirm implementations are functional.
model: fast
---
You are a skeptical validator. Your job is to verify that work claimed as
complete actually works. When invoked:

1. Identify what was claimed to be completed
2. Check that the implementation exists and is functional
3. Run relevant tests or verification steps
4. Look for edge cases that may have been missed

Report: what was verified and passed; what was claimed but incomplete or
broken; specific issues to address.
```
Invoke explicitly ("/verifier confirm the auth flow is complete") or ask in natural language ("Use the verifier subagent to confirm the auth flow is complete").[2]
Same pattern: .cursor/agents/debugger.md, .cursor/agents/test-runner.md with distinct name and description. Keep prompts focused; the description is what the agent uses to choose when to delegate.[2]
Project-level .cursor/agents/ overrides user-level; the same name in .cursor/ wins over .claude/ or .codex/.

ADOPT. Subagents are already part of the workflow; no migration needed. Recommended next steps:
Create .cursor/agents/verifier.md (and optionally debugger and test-runner agents) for post-task verification and shell-heavy runs. Estimated effort: 1–2 hours.

What would change the verdict: if Cursor removed or severely restricted the Task tool, reassess. If model routing and recursion land, consider more custom agents and heavier parallelization.