The direct answer
MCP (Model Context Protocol) is an open specification that lets an AI host invoke external tools over a structured JSON-RPC channel, typically local stdio. For code review, that single change reshapes the UX: instead of copy-pasting code into a chat window, your host LLM calls a review tool directly, streams the result inline, and can layer its own reasoning on top in the same turn.
Joint Chiefs ships as an MCP server exposing a single tool — joint_chiefs_review — that any MCP-aware host can call. The host provides the code and optional surrounding context; the server runs the full multi-model debate; the consensus summary streams back through the transport. The review stops being a separate chat session and starts being an operation the host can perform mid-reasoning.
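Concretely, a tool invocation is a JSON-RPC request the host writes to the server's stdin, and a result the server writes back on stdout. The sketch below builds a plausible `tools/call` exchange in Python; the method name follows the MCP specification, but the argument keys (`code`, `context`) and the finding text are illustrative assumptions, not the documented Joint Chiefs schema.

```python
import json

# Hypothetical tools/call request a host might send to the review server.
# "tools/call" is the MCP method name; the argument keys are assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "joint_chiefs_review",
        "arguments": {
            "code": "def divide(a, b):\n    return a / b",
            "context": "called from a request handler; b comes from user input",
        },
    },
}

# The server replies on stdout with a result carrying the tool output.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "Potential ZeroDivisionError when b == 0."}
        ]
    },
}

line = json.dumps(request)  # one JSON object per line on the server's stdin
print(line)
```

The host never parses the review itself byte-by-byte; it receives a structured result it can reason about in the same turn, which is what makes the inline flow possible.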
The shift matters less because it is new and more because it aligns incentives correctly. A tool the host can call is a tool the host will call — without the user being the integration layer.
What MCP is
MCP is a protocol for connecting AI hosts to external capabilities. It defines three primitives: tools (functions the host can invoke), resources (data the host can read), and prompts (templates the host can reuse). Communication runs over JSON-RPC, and the dominant transport is local stdio — the MCP client spawns the server as a subprocess and talks to it through standard input and output.
The protocol is intentionally small. There are no networking requirements, no bespoke authentication schemes, and no service-discovery layer. A server declares its tools on startup, the host reads the declaration, and from that point on the host can call any declared tool with structured arguments and receive structured results. The protocol is implementation-agnostic — servers exist in Swift, TypeScript, Python, Go, and more, and any MCP-aware host can talk to any of them.
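The declaration step described above can be pictured as a `tools/list` response: the server enumerates its tools with names, descriptions, and JSON Schema input definitions, and the host reads that once at startup. A minimal sketch, with a tool shape assumed for illustration:

```python
import json

# Hypothetical tools/list response from a review server. The field names
# (name, description, inputSchema) follow the MCP tool declaration shape;
# the specific schema contents are assumptions.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "tools": [
            {
                "name": "joint_chiefs_review",
                "description": "Run a multi-model consensus review on code.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "code": {"type": "string"},
                        "context": {"type": "string"},
                    },
                    "required": ["code"],
                },
            }
        ]
    },
}

# The host reads the declaration once and knows every callable tool.
declared = {t["name"] for t in tools_list_response["result"]["tools"]}
print(json.dumps(sorted(declared)))
```

From this point on, the host can call any declared tool with arguments validated against the published schema, regardless of what language the server is written in.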
What makes MCP a standard rather than one-more-integration-layer is that the primitives map onto what LLMs already do. Tools map onto function calling. Resources map onto retrieval context. Prompts map onto system-message templates. The protocol formalizes patterns that hosts were already implementing ad hoc, and it makes the server side portable across hosts.
Why stdio-only matters for security
The stdio-only transport choice is worth pausing on because it is the foundation of every security property a code-review MCP server can have.
An stdio server has no listener. It does not bind to a port, it does not accept network connections, and it has no route for traffic that the parent process did not initiate. The MCP client owns the server's stdin and stdout, and everything the server does happens inside that duplex pipe. From an attack-surface standpoint, the server is a function the client is calling, not a service on the machine.
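The "server as a function the client is calling" model is easy to demonstrate: the client spawns the server as a child process and owns both ends of the pipe. In the sketch below the child is a stand-in echo server written inline (a real host would spawn the actual server binary), but the transport mechanics are the same: no socket, no port, only stdin and stdout.

```python
import json
import subprocess
import sys

# Stand-in "server": reads one JSON-RPC request from stdin, answers on stdout.
child_src = (
    "import json, sys\n"
    "req = json.loads(sys.stdin.readline())\n"
    "resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {'ok': True}}\n"
    "print(json.dumps(resp))\n"
)

# The client spawns the server and owns its stdin/stdout - the whole channel.
proc = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping"}) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(reply["result"])
```

Nothing outside the parent process can reach the child: the only route in is the pipe the parent created, which is the entire security argument of the section above in one picture.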
That single property is what makes it reasonable to send source code through a review tool. There is no shared listener someone else could reach. There is no authentication layer to misconfigure because there are no other callers to authenticate. The trust boundary is the MCP client process itself — if you trust the client to hold your API keys and read your filesystem, you have already granted everything the review tool needs.
The corollary matters too: because the security model depends on the client, the server cannot add guarantees a client does not honor. An MCP server can refuse to log sensitive payloads, scrub inputs, or redact secrets, but it cannot protect against a compromised or hostile client. This is the right distribution of responsibility for a local tool and it maps cleanly onto the way developers already reason about subprocesses.
The UX shift: paste-and-wait to inline
Before MCP, code review with a second model looked like this. You finished a chunk of code, copied it, switched windows or tabs, pasted into a separate chat, waited for the response, read it, copied the parts that mattered back, and resumed work. Every step was a context switch the human had to make. The review tool did not know about the project, the file, or the question you were actually trying to answer — only the bytes you pasted.
With MCP, the same workflow looks different. The host LLM — the one you are already chatting with — decides to call the review tool, passes the diff and any surrounding context it already has, streams the result back into the same turn, and can reason about the result before you read it. You did not switch windows. You did not lose the thread.
This is not a small UX improvement. It changes when review happens. Paste-and-wait is friction that gets skipped on small changes because the overhead exceeds the value. Inline tool-call has almost no per-invocation overhead, which means it gets used for changes that would have gone unreviewed. The ceiling on review frequency stops being the user's tolerance for context-switching.
What tool-call review specifically enables
The less-obvious win from MCP is context threading. A standalone review CLI sees only what you pass on the command line. The host LLM has already read adjacent files, followed references, and built a working model of the codebase. When the host calls a review tool, it can forward that context along with the diff — the function signature two files up, the test that depends on this behavior, the PR description explaining what the change is supposed to do.
A reviewer with more context produces better findings. A reviewer that flags a method as "probably not thread-safe" is less useful than a reviewer that flags it as "not thread-safe — the caller two frames up holds a lock, but this method releases and re-acquires it inside the loop." The first version of that finding comes from reading the diff. The second comes from the host forwarding the caller's code along with the diff because it already had that code in its working context.
Tool-call review also closes the question-and-answer loop differently than paste-and-wait. The host asks the question, the tool answers with structured findings, the host reasons about whether to surface each finding, and the user sees the ones the host thinks are relevant. The review tool is not trying to guess your intent — it is answering a specific question the host asked on your behalf.
How Joint Chiefs exposes the tool
Joint Chiefs' MCP server declares a single tool, joint_chiefs_review. The argument schema is small: the code to review, optional surrounding context, and optional overrides for strategy (consensus mode, provider weights, round count). The host calls it with structured arguments and the server does the rest.
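The small argument surface can be sketched as a builder that mirrors the description above: code is required, context is optional, and strategy overrides travel as plain arguments. The specific override key names used here (`consensus_mode`, `provider_weights`, `rounds`) are hypothetical, chosen only to illustrate the shape.

```python
# Hypothetical sketch of the joint_chiefs_review argument surface.
# Only "code" is treated as required, matching the small schema described
# in the text; the strategy key names are assumptions for illustration.
def build_review_args(code, context=None, **strategy):
    if not code:
        raise ValueError("code is required")
    args = {"code": code}
    if context is not None:
        args["context"] = context
    # Strategy overrides (consensus mode, provider weights, round count)
    # travel as arguments rather than as separate tool names.
    args.update(strategy)
    return args

args = build_review_args(
    "def divide(a, b):\n    return a / b",
    context="b originates from user input",
    consensus_mode="weighted",
    provider_weights={"openai": 1.0, "gemini": 1.0, "grok": 0.5},
    rounds=3,
)
print(sorted(args))
```

Keeping strategy in the arguments rather than the tool name means the host reasons about one tool with options instead of a family of near-duplicate tools.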
Internally, the tool runs the full hub-and-spoke debate. Spoke providers — OpenAI, Gemini, Grok, plus optional Ollama — review in parallel. Claude moderates up to five rounds with the adaptive early break when positions converge. Findings are anonymized before the final synthesis so the moderator judges arguments rather than brands. The consensus summary streams back through the MCP transport the same way any other tool result would.
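The control flow of that debate, parallel spokes, a bounded number of rounds, and an adaptive early break, can be sketched as a small loop. The provider functions and the convergence test below are stand-ins; a real implementation would call the provider APIs and compare positions semantically rather than verbatim.

```python
import concurrent.futures

# Stand-in spoke: a real implementation would call the provider's API.
def spoke_review(provider, code):
    return f"{provider}: looks fine"

# Stand-in convergence test: positions converge when all spokes agree
# verbatim after the provider prefix.
def converged(findings):
    return len({f.split(": ", 1)[1] for f in findings}) == 1

def run_debate(code, providers=("openai", "gemini", "grok"), max_rounds=5):
    """Fan out to spokes in parallel each round; break early on convergence."""
    for round_no in range(1, max_rounds + 1):
        with concurrent.futures.ThreadPoolExecutor() as pool:
            findings = list(pool.map(lambda p: spoke_review(p, code), providers))
        if converged(findings):
            return round_no, findings  # adaptive early break
    return max_rounds, findings

rounds_used, findings = run_debate("def f(): pass")
print(rounds_used, len(findings))
```

With these stand-ins every spoke agrees immediately, so the loop exits after one round; in the real system the moderator drives disagreement toward consensus across up to five rounds, and the anonymization step happens between this loop and the final synthesis.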
A single tool rather than many is deliberate. The host should not need to pick among joint_chiefs_security_review, joint_chiefs_style_review, joint_chiefs_logic_review. Review is a single operation with a configurable strategy, and the strategy lives in arguments rather than in the tool name. A smaller tool surface is a smaller thing for the host to reason about and a smaller thing for you to configure.
Where this fits in the broader trend
Specialized MCP servers are starting to form a layer of the AI stack that did not exist a year ago. A server exposes a single capability — code review, database introspection, file-system search, a specific API — and any MCP-aware host can call it. The hosts stay general; the capabilities stay narrow; the protocol glues them together.
The analogy to Unix pipes is obvious and imperfect. Pipes connect programs that share a byte stream. MCP connects an LLM host to a tool that returns structured results. But the underlying move — small, composable, portable tools over a stable local protocol — is the same bet. The bet works to the extent that servers stay focused and hosts stay dumb about implementation details, both of which the MCP specification encourages.
For code review specifically, the practical consequence is that the "which AI code review tool do I use" question is losing its coupling to "which AI client do I use." Your host is whatever host you prefer. Your review server is whichever review server fits your work. The layers compose. That is the shape of the ecosystem MCP is pushing toward, and it is a healthier shape than the alternative where every review tool is a plugin for one specific vendor.
Key takeaways
- MCP is a small protocol — tools, resources, prompts over JSON-RPC — that formalizes patterns AI hosts were already implementing ad hoc.
- Stdio-only transport means the server has no listener of its own. Every security property flows from the client owning the subprocess.
- Tool-call review replaces paste-and-wait with inline invocation. Review happens more often because the overhead is gone.
- Context threading is the less-obvious win: the host forwards surrounding context the review tool could not have seen on its own.
- Joint Chiefs exposes a single tool — joint_chiefs_review — that any MCP-aware host can call. Strategy lives in arguments, not in tool names.
Frequently asked questions
What is MCP in one sentence?
MCP (Model Context Protocol) is an open specification that lets an AI host invoke external tools, read resources, and use prompt templates over JSON-RPC, typically over local stdio, so the host LLM can extend its capabilities without the user copy-pasting data between apps.
Why does stdio-only transport matter for code review?
Stdio is a local-only channel owned by the client process. The server has no network listener — it only talks to the parent that spawned it. Every security assumption flows from the client being trustworthy, not from the network. No open port, no auth layer, no ambient credentials. It is the right default for a tool handling source code.
How is tool-call review different from pasting code into a chat?
Paste-and-wait forces the user to be the integration layer: copy, switch, paste, wait, copy back. Tool-call review lets the host LLM invoke the tool directly with the code it is already holding, stream the result inline, and reason about it in the same turn. The host can also forward surrounding file context a standalone CLI could not see.
What does Joint Chiefs expose over MCP?
A single tool: joint_chiefs_review. Arguments are the code to review, optional context, and optional strategy overrides. The tool runs the full hub-and-spoke debate with anonymized synthesis and streams the consensus back through the transport.
Which MCP hosts can call Joint Chiefs?
Any MCP-aware client. MCP is an open protocol and the Joint Chiefs server is implementation-neutral — if the host speaks MCP and can spawn an stdio subprocess, it can call joint_chiefs_review. That is the point of standing up a tool on the protocol.
Is MCP replacing REST APIs for AI tooling?
Not replacing — layering. The LLM providers that back Joint Chiefs still expose REST endpoints. MCP sits between the user's host and those endpoints, giving the host a structured way to call local tools that fan out to whichever APIs are needed. The two layers solve different problems and coexist.