Beta Space Studio

MCP Wasn't Enough, So We Built a CLI: Why the Command Line Is Still the Right Choice for AI Agents

Said Sürücü
7 min read

When we released YargıMCP (Yargı — Turkish for "judiciary"), a small revolution happened in the Turkish legal world: 587 GitHub stars, 99 forks, and 19 tools covering 13 different courts and institutions. It works everywhere from Claude Desktop to Gemini CLI, cutting lawyers' precedent-search time from hours to minutes.

So why did we build a CLI too?

Because MCP isn't the only way for AI to talk to the outside world. And in some scenarios, it's not even the best way.

The Hidden Cost of MCP

Model Context Protocol is a brilliant idea: a standard bridge between AI models and tools. Set it up once, run it everywhere. We believed in this vision and built YargıMCP on exactly this philosophy.

But as we used MCP in the real world, in real workflows, we noticed some friction points.

First, context window cost. When an MCP server connects to an AI agent, it loads all tool schemas into the agent's context window. In YargıMCP, we optimized this by 61.8%, reducing it from 14,061 tokens to 5,369. But that's still tokens spent before the agent has done any actual work. When you connect three or four MCP servers to an agent, a significant portion of the context window is filled with just "tool definitions." The remaining space for the agent's actual job — reasoning — shrinks.
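As a quick sanity check (illustrative shell arithmetic, not project code), the quoted reduction works out:

```shell
# Verify the 61.8% schema-size reduction quoted above,
# computed in integer tenths of a percent
before=14061
after=5369
reduction=$(( (before - after) * 1000 / before ))   # 618 tenths of a percent
echo "${reduction:0:2}.${reduction:2}%"             # prints 61.8%
```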

Second, chaining difficulty. The most powerful aspect of the Unix philosophy is that small tools can be piped into one another, as in grep | sort | uniq -c. MCP tools can't be piped together: each tool call is independent, and you can't pass one result directly to the next.
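A three-tool pipeline makes that composability concrete; each stage does one small job and hands its output to the next:

```shell
# Count occurrences of each error line: filter, group, count
printf 'ERR 500\nERR 404\nERR 500\nOK 200\n' | grep ERR | sort | uniq -c
```

Note that uniq -c only counts adjacent duplicates, which is exactly why sort sits in the middle of the chain.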

Third, platform dependency. MCP inherently requires a client-server architecture. It doesn't work without an MCP client like Claude Desktop, 5ire, or similar. This makes MCP dependent on specific platforms.

CLI: The Native Language of AI

At this point, we stepped back and asked a fundamental question: What is the most natural interface for AI agents?

The answer turned out to be much older than we expected: the command line.

Large language models were trained on massive amounts of data. A large portion of this data consists of GitHub repos, Stack Overflow answers, technical documentation, and terminal commands. An AI model already "knows" tools like git, grep, curl, jq. It has learned from training data how these commands work, what parameters they take, and what their outputs look like.

No model has ever seen MCP tools during training. Every MCP server's schema must be discovered at runtime. That means extra tokens, extra latency, and extra error potential.

The CLI offers the opposite trade-off: zero schema cost. The model already knows the command; it just needs a --help call to learn the specific parameters. One of 2026's most striking technical discussions is about exactly this: Peter Steinberger, creator of OpenClaw, wrote roughly ten custom CLIs for his autonomous agent, and OpenAI hired him precisely because of this approach.

Yargı CLI: What, Why, How?

Yargı CLI is the command-line counterpart of YargıMCP. It provides access to the same Turkish legal databases (Court of Cassation, Council of State, Local Courts, Regional Courts of Appeal, Appeal in the Interest of Law) but through a pure command-line interface instead of the MCP protocol.

Its design philosophy is built on five principles:

JSON-only output. Every command writes structured JSON to stdout. Designed not for human reading, but for machine processing.

Pipe-friendly. Can be chained with Unix pipes. Works with jq, xargs, or any Unix tool.

Rich --help. Parameter descriptions, search operators, output schemas, and examples are embedded directly in the help text. This way, an AI agent can discover the CLI on its own without needing external documentation.

Zero authentication. Install and run. No API keys, tokens, or OAuth flows.

Stateless. Every call is independent. No sessions, no cookies, no server connections.
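Put together, these principles let an agent treat every command as a pure function from arguments to JSON on stdout. A minimal illustration against a simulated search result (the documentId field follows the examples below; the real output schema is richer than this sketch):

```shell
# Simulated `yargi bedesten search` output (hypothetical, heavily simplified)
result='{"decisions":[{"documentId":"1123588300","esasNo":"2024/1234"}]}'
echo "$result" | jq -r '.decisions[0].documentId'   # prints 1123588300
```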

How Does It Work in the Real World?

A typical workflow for an AI agent with Yargı CLI looks like this:

# 1. Search for a decision
yargi bedesten search "property rights" -c YARGITAYKARARI

# 2. Extract the document ID from the result
yargi bedesten search "property rights" \
  | jq -r '.decisions[0].documentId'

# 3. Get the full decision text
yargi bedesten search "property rights" \
  | jq -r '.decisions[0].documentId' \
  | xargs yargi bedesten doc

# 4. List all case numbers
yargi bedesten search "workplace accident" -c YARGITAYKARARI \
  | jq '[.decisions[] | .esasNo]'

Notice the elegance of this flow: each step consumes the previous step's output, intermediate results can be inspected, and every step can be tested in isolation. An AI agent already knows this pattern, because it has seen thousands of similar pipe chains in its training data.

To do the same thing with MCP, the agent would first need to load the tool schema, then make a separate structured tool call for each step, hold the results in its own memory, and manually pass them to the next call.

MCP vs CLI: When to Use Which?

It wouldn't be right to make a sharp "one is good, the other is bad" distinction here. They are different tools optimized for different scenarios.

Where MCP excels: End-user applications like Claude Desktop, interaction between non-technical users and AI, clients with visual interfaces. When a lawyer says "find Court of Cassation decisions about rent increases" in Claude Desktop, MCP runs in the background and the user doesn't deal with any technical details. This experience cannot be provided with CLI.

Where CLI excels: Autonomous agents, CI/CD pipelines, RAG systems, batch data processing, chained integration with other tools. If a coding agent (Claude Code, Codex, Gemini CLI) needs to query Turkish legal databases, running yargi bedesten search directly is far more efficient than spinning up an MCP server.
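For the batch case, what an agent or CI job would run looks roughly like this. It is a sketch, not project code: the yargi shell function below is a stub with canned output so the script runs even without the CLI installed; delete it and the real yargi binary takes over (the subcommands and flags are the ones shown earlier in this post).

```shell
# Stub standing in for the real `yargi` CLI (canned, simplified output);
# remove this function when the actual CLI is installed
yargi() { echo '{"decisions":[{"documentId":"111"},{"documentId":"222"}]}'; }

# For each query: search, extract every documentId, fetch the full decision,
# and collect everything into one JSON-lines file
for query in "expropriation" "workplace accident"; do
  yargi bedesten search "$query" -c YARGITAYKARARI \
    | jq -r '.decisions[].documentId' \
    | while read -r id; do
        yargi bedesten doc "$id"
      done
done > all-decisions.jsonl
```

The JSON-lines file can then feed a RAG indexer or any downstream step, one decision per line.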

We built both because both are needed. YargıMCP is the interface for end-user applications. Yargı CLI is the interface for developers and agents.

The Bigger Picture: CLI Renaissance

Yargı CLI is part of a larger trend. In 2025-2026, a quiet but powerful CLI renaissance is happening in the software world.

Autonomous coding agents like Claude Code, Codex CLI, and Gemini CLI live in the terminal. General-purpose autonomous agents like OpenClaw use CLI tools as "skills." The custom CLIs Peter Steinberger wrote for OpenClaw proved how powerful this approach is in practice.

The underlying logic is simple: the terminal is a 50-year-old interface. Proven, tested, standardized. Works on every operating system. Fundamental problems like permission models, error handling, and process control have long been solved. And most importantly, AI models deeply know this interface from their training data.

MCP added a new and valuable layer to this ecosystem. But it didn't replace CLI. On the contrary, the best AI systems use both together: MCP as the user-facing front, CLI as the power running under the hood.

Getting Started

If you want to try Yargı CLI:

# Node.js >= 24 required
npm install -g @saidsrc/yargi

# Make your first search
yargi bedesten search "expropriation" -c YARGITAYKARARI

# Filter by date range
yargi bedesten search "workplace accident" --date-start 2024-01-01 --date-end 2024-12-31

# Get the full text of a decision
yargi bedesten doc 1123588300 | jq -r '.markdownContent'

The project is open source: github.com/saidsurucu/yargi-cli

YargıMCP is here: github.com/saidsurucu/yargi-mcp

Final Words

In the AI world, new protocols, new standards, new frameworks emerge every day. Each tries to address the shortcomings of the previous one. This is a good thing.

But sometimes the most powerful solution isn't the newest one. Sometimes a 50-year-old interface can still be the most efficient, most reliable, and most natural way for AI to talk to the outside world.

At Beta Space Studio, we learned this: choosing the right tool for the right job is always more valuable than being committed to a single tool. MCP and CLI aren't competitors — they're complementary. Using both together is the key to making AI solutions both user-friendly and developer-friendly.

Ultimately, it's not about advocating for a protocol or an interface. It's about enabling people (and agents) to access information in the fastest, most efficient way possible.

That's exactly what Yargı CLI is built to do.


If you're curious about CLI and MCP integration in your AI projects, reach out to us at hello@betaspacestudio.com.
