Monday, July 21, 2025

To MCP or not to MCP?

When we're building automation tools in 2025, I see two main approaches:

  1. Agent + MCP: Point an LLM-powered Agent at MCP servers, give the Agent a detailed description of the task, and let the Agent decide which tools to use to complete the task. For this approach, we can use an existing Agent from agentic frameworks like PydanticAI, OpenAI-Agents, Semantic Kernel, etc., and we can either use an existing MCP server or build a custom MCP server depending on what tools are necessary to complete the range of tasks.
  2. Old school with LLM sprinkles: This is the way we would have built it before LLMs: directly script the actions needed to complete the task, using APIs and SDKs, and then bring in an LLM for fuzzy decision/analysis points, where we might previously have used regular expressions or lovingly handcrafted if statements (see the sketch below).
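
To make that contrast concrete, here's a minimal sketch of approach #2: ordinary scripted code with a single LLM call at the one fuzzy decision point. The model name, prompt, and example input are hypothetical placeholders, and I'm assuming the openai Python package.

```python
# Approach #2 in miniature: deterministic code everywhere, with one LLM call
# at the single fuzzy decision point (where a regex or if-statement used to go).
# Assumes the openai package; the model name and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_technical(profile_text: str) -> bool:
    """The one fuzzy decision in the workflow, delegated to an LLM."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "Answer with exactly YES or NO."},
            {"role": "user", "content": f"Is this person technical?\n\n{profile_text}"},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Everything around that call stays ordinary, debuggable Python:
if is_technical("Staff engineer working on compilers and distributed systems"):
    print("Accept")
else:
    print("Ignore")
```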

There's a big obvious benefit to approach #1: we can theoretically give the agent any task that is possible with the tools at its disposal, and the agent can complete that task. So why do I keep writing my tools using approach #2??

  1. Control: I am a bit of a control freak. I like knowing exactly what's going on in a system, figuring out where a bug is happening, and fixing it so that bug never happens again. The more that my tools rely on LLMs for control flow, the less control I have, and that gives me the heebie jeebies. What if the agent only succeeds in the task 90% of the time, as it goes down the wrong path 10% of the time? What if I can't get the agent to execute the task exactly the way I envisioned it? What if it makes a horrible mistake, and I am blamed for its incompetence?

  2. Accuracy: Closely related to the last point -- the more LLM calls are added to a system, the harder it is to guarantee accuracy. The impossibility of high accuracy from multi-LLM workflows is discussed in detail in this blog post from an agent developer.
  3. Cost: The MCP-powered approach requires far more tokens, and thus more cost and more energy consumption, all things that I'd like to reduce. The agent spends tokens on the list of MCP servers and tool definitions, and then spends more tokens for every additional step it decides to take. In my experience, agents generally do not take the most efficient path to a solution. For example, since they don't know exactly what context they'll need or where the answer to a question lives, they prefer to over-research rather than under-research. An agent can often accomplish a task eventually, but at what cost? How many tokens did it have to use to arrive at the same path that I could have hand-coded?

My two most recent "agents" both use approach #2, hand-coded automation with a single call to an LLM where it is most needed:

Personal LinkedIn Agent

github.com/pamelafox/personal-linkedin-agent

This tool triages my inbound connection requests by automating the browser with Playwright to open my account, check requests, open profiles, and click Accept/Ignore as needed. It uses the LLM only to decide whether a connection meets my personal criteria ("are they technical? accept!"), and that's it.

When I first started coding the agent, I did try approach #1 with the Playwright MCP server, but it required so many tokens that it blew past the low rate limits of the LLM I was using at the time. I wanted my solution to work for models with low rate limits, so I switched over to the Python playwright package instead.
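
For a sense of what the hand-coded version looks like, here's a rough sketch using the sync Playwright API. The LinkedIn URL and selectors are hypothetical placeholders (the real ones live in the repo), and should_accept stands in for the single LLM decision call, like the is_technical sketch earlier.

```python
# Rough sketch of the scripted browser flow, assuming the sync Playwright API.
# The URL, selectors, and should_accept() are hypothetical placeholders.
from playwright.sync_api import sync_playwright

def triage_connection_requests(should_accept):
    with sync_playwright() as p:
        # Assumes an already-authenticated browser session/profile.
        browser = p.chromium.launch(headless=False)
        page = browser.new_page()
        page.goto("https://www.linkedin.com/mynetwork/invitation-manager/")  # hypothetical URL
        for invitation in page.locator(".invitation-card").all():            # hypothetical selector
            profile_text = invitation.inner_text()
            if should_accept(profile_text):  # the one LLM call in the workflow
                invitation.get_by_role("button", name="Accept").click()
            else:
                invitation.get_by_role("button", name="Ignore").click()
        browser.close()
```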

GitHub Repo Maintainer Agent

github.com/pamelafox/github-repo-maintainer-agent

The goal of this tool is to automate the maintenance of my hundreds of repositories, handling upgrades like Python package dependencies and tool modernization. Currently, the tool uses the GitHub API to check my repositories for failed Dependabot PRs, analyzes the failure logs using an LLM call, creates issues referencing those PRs, and assigns those issues to the GitHub Copilot Coding agent.
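
Here's a rough sketch of that scripted flow against the GitHub REST API, assuming the requests package. The summarize_failure function stands in for the LLM log-analysis call, the issue wording is made up, and the final step of assigning the issue to the Copilot coding agent is left out.

```python
# Sketch of the scripted maintenance flow, assuming the requests package
# and the GitHub REST API. summarize_failure() and the issue text are
# hypothetical placeholders; assigning to Copilot is a separate step.
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def triage_dependabot_prs(owner: str, repo: str, summarize_failure):
    # Find open PRs authored by Dependabot.
    pulls = requests.get(f"{API}/repos/{owner}/{repo}/pulls",
                         headers=HEADERS, params={"state": "open"}).json()
    for pr in pulls:
        if pr["user"]["login"] != "dependabot[bot]":
            continue
        # Look up the check runs on the PR's head commit to see if CI failed.
        checks = requests.get(
            f"{API}/repos/{owner}/{repo}/commits/{pr['head']['sha']}/check-runs",
            headers=HEADERS).json()
        failed = [c for c in checks["check_runs"] if c["conclusion"] == "failure"]
        if not failed:
            continue
        # One LLM call: summarize why the checks failed (logs fetched elsewhere).
        summary = summarize_failure(failed)
        # Create an issue referencing the PR; assigning it to the Copilot
        # Coding agent happens in a later step, not shown here.
        requests.post(f"{API}/repos/{owner}/{repo}/issues", headers=HEADERS, json={
            "title": f"Fix failing Dependabot PR #{pr['number']}",
            "body": f"Dependabot PR #{pr['number']} has failing checks.\n\n{summary}",
        })
```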

That's when the real agentic part of this tool happens, since the GitHub Copilot Coding agent does use an MCP-based approach to resolve the issue. Those Copilot agent sessions can be quite long, and often unsuccessful, so my hope is that I can improve my tool's LLM calls to give that agent additional context and reduce its costs.

I am tempted to add in an MCP-based option to this tool, to help me with more general maintenance tasks, but I am not sure I am ready to give up control... are you?
