What I like about MCP servers: they give me lots of great tools that can make my agents more powerful, with very little work on my side. 🎉
What I don't like about MCP servers: they give me TOO many tools! I usually only need a handful of tools for a task, but a server can expose dozens. 😿
The problems with too many tools:
- LLM confusion. The LLM is presented with the definition of every single tool in the server, and it needs to decide which tool (if any) is best for the job. That's a hard decision for an LLM, and it becomes much easier when the tool list is narrowed.
- Increased tokens. Every tool definition consumes tokens, which can cost more money, increase latency, and potentially even push past the model's context window limit.
- Destructive actions. A server may include tools that are read-only, simply returning data to serve as context, but many servers expose tools that perform write operations, like the GitHub MCP server's tools for creating issues, closing issues, pushing branches, and more. It's possible your task requires some of those write operations, but you generally want to be very explicit about whether an agent is allowed to take actions that can actually change something in your accounts and environments. Otherwise, you can be in for a nasty surprise when the agent takes actions that you weren't expecting. (Ask me how I know...)
Fortunately, there is almost always a way to configure agents to only allow a subset of the tools from an MCP server. In this blog post, I'll share ways to filter tools in my favorite agentic coder, GitHub Copilot in VS Code, plus two popular AI agent frameworks, Langchain v1 and Pydantic AI.
Agentic coding with GitHub Copilot in VS Code
Global configuration
When you are using agent mode in VS Code, configure the tools by selecting the gear icon near the chat input window.
That will pop up a window showing all your available tools, coming from both installed MCP servers and VS Code extensions. You can select or deselect at the extension/server level, the toolset level, and the individual tool level.
That tool selection applies to all of your interactions in agent mode.
What if you want different tool subsets for different kinds of tasks - like when you're planning a feature versus fixing a bug?
Custom modes
That's where custom chat modes come in. We define a modename.mode.md file that provides a prompt, a preferred model, and an allowed list of tools.
For example, here's the start of my fixer.mode.md file for fixing issues:
---
description: 'Fix and verify issues in app'
model: GPT-5
tools: ['extensions', 'codebase', 'usages', 'vscodeAPI', 'problems', 'changes', 'testFailure', 'fetch', 'findTestFiles', 'searchResults', 'githubRepo', 'runTests', 'runCommands', 'runTasks', 'editFiles', 'runNotebooks', 'search', 'new', 'create_pull_request', 'get_issue', 'get_issue_comments', 'get-library-docs', 'playwright', 'pylance mcp server']
---
# Fixer Mode Instructions
You are in fixer mode. When given an issue to fix, follow these steps:
1. **Gather context**: Read error messages/stack traces/related code. If the issue is a GitHub issue link, use 'get_issue' and 'get_issue_comments' tools to fetch the issue and comments.
2. **Make targeted fix**: Make minimal changes to fix the issue. Do not fix any issues that weren't identified. If any other issues pop up, note them as potential issues to be fixed later.
3. **Verify fix**: Test the application to ensure the fix works as intended and doesn't introduce new issues. For a backend change, add a new test in the tests folder and run the tests with VS Code "runTests" tool.
VS Code detects all of the custom chat modes in my project and lists them as options in the mode picker in the chat window.
When I'm in that mode, I have full confidence that the agent will only use the tools from that list, and I often customize the mode prompt with additional guidance on using the allowed tools, for optimal results.
Python AI agent frameworks
At this point, most AI agent frameworks have built-in support for pointing an agent at an MCP server, giving that agent the ability to use the tools from the server. A growing number of the frameworks also make it possible to filter the list of tools from the server. If you use a framework that doesn't yet make it possible, file an issue and let them know it's important to you. Let's look at two examples from popular frameworks.
Langchain v1
Langchain v1 is the latest major version of Langchain, and it is a very agent-centric SDK. It's still in alpha testing as of September 2025, so we need to explicitly install the alpha version of the langchain package in order to use it.
To use a langchain agent with MCP, we install the langchain_mcp_adapters package, and then construct an MCP client for the server.
The client below connects to the GitHub MCP server using a fine-grained personal access token:
import os

from langchain_mcp_adapters.client import MultiServerMCPClient

# Connect to the GitHub MCP server over streamable HTTP,
# authenticating with a fine-grained personal access token.
mcp_client = MultiServerMCPClient(
    {
        "github": {
            "url": "https://api.githubcopilot.com/mcp/",
            "transport": "streamable_http",
            "headers": {"Authorization": f"Bearer {os.getenv('GITHUB_TOKEN', '')}"},
        }
    }
)
Note that I configured that access token with the minimal access needed for the desired tools, which is another best practice to avoid unintended actions.
Then I fetch the list of tools from the server, and filter that list to only keep the 4 tools needed for the task:
tools = await mcp_client.get_tools()
allowed_tool_names = ("list_issues", "search_code", "search_issues", "search_pull_requests")
filtered_tools = [t for t in tools if t.name in allowed_tool_names]
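If you're not sure what the tools are named, it can help to print out the names reported by the server before settling on the allowed list (a small hypothetical snippet, not part of the original example):
# List every tool name the GitHub MCP server currently exposes,
# so you can pick out the ones your task actually needs.
print(sorted(tool.name for tool in tools))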
Finally, I create an agent with my prompt and filtered tool list:
from langchain.agents import create_agent

agent = create_agent(base_model, prompt=prompt, tools=filtered_tools)
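To actually run the agent, you invoke it with a messages-style input. Here's a rough sketch, assuming the LangGraph-style interface that Langchain v1 agents expose (the question text is just a placeholder):
# Minimal sketch: send one user message and print the final reply.
result = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "Which open issues mention MCP tool filtering?"}]}
)
print(result["messages"][-1].content)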
For a full example, see langchainv1_mcp_github.py. If you're using Langchain v1, you may also be interested in their human-in-the-loop middleware, which can prompt for human confirmation only when certain tools are called. That way, you could give your agent access to write operations, but ensure that a human approves each of those actions.
Pydantic AI
Pydantic AI is an agents framework from the Pydantic team that puts a big focus on type safety and observability.
Pydantic AI includes MCP support out of the box, so we only need to install pydantic-ai. We configure the target MCP server with the URL and authorization headers:
import os

from pydantic_ai.mcp import MCPServerStreamableHTTP

# GitHub MCP server, authenticated with a fine-grained personal access token
server = MCPServerStreamableHTTP(
    url="https://api.githubcopilot.com/mcp/",
    headers={"Authorization": f"Bearer {os.getenv('GITHUB_TOKEN', '')}"},
)
Next we create a FilteredToolset on the MCP server by defining a lambda function that filters by name:
allowed_tool_names = ("list_issues", "search_code", "search_issues", "search_pull_requests")
filtered_tools = server.filtered(
    lambda ctx, tool: tool.name in allowed_tool_names
)
Finally, we point the agent at the filtered tool set:
from pydantic_ai import Agent

agent = Agent(model, system_prompt=prompt, toolsets=[filtered_tools])
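To run the agent, something like the sketch below should work, assuming a recent pydantic-ai version where the agent acts as an async context manager that manages the MCP connection (the question text is a placeholder):
# Rough sketch: open the MCP connection, ask one question, print the answer.
async with agent:
    result = await agent.run("Which open issues mention MCP tool filtering?")
print(result.output)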
For a full example, see pydantic_mcp_github.py.
As you can see, we can achieve the same tool filtering in multiple frameworks with a similar approach.