Monday, March 16, 2026

Do stricter MCP tool schemas increase agent reliability?

MCP servers contain tools, and each tool is described by its name, description, input parameters, and return type. When an agent calls a tool, it formulates the call based only on that metadata; it knows nothing about the tool's internals. For my PyAI talk last week, I investigated this hypothesis:

If we use stricter types for MCP tool schemas, then agents calling those tools will be more successful.

This was a hypothesis based on my personal experience over the last year of developing with agents and MCP servers, where I'd started with MCP servers with very minimal schemas, witnessed agents failing to call them correctly, and then iterated on the schemas to improve tool-calling success. I thought for sure that my hypothesis would be validated with flying colors. Let's see what I discovered instead...

Table of contents:
  1. A basic MCP tool and schema
  2. Annotating parameters with descriptions
  3. Constraining parameters with types
  4. Setting up evaluations
  5. Evaluation results: category
  6. Evaluation results: date
  7. Cross-model evaluations
  8. Impact of reasoning effort
  9. Comparing agent frameworks
  10. Takeaways

A basic MCP tool and schema

For this experiment, I built an MCP server that can add expenses to a database. My add_expense tool needs four pieces of information:

  • date: The date that the expense was incurred
  • amount: The amount of the expense
  • category: The category of the expense
  • description: A free-form description of the expense

This is what a first attempt at the tool might look like, using the Python FastMCP framework, and a Python type annotation for each parameter:

@mcp.tool
async def add_expense(
    expense_date: str,
    amount: float,
    category: str,
    description: str,
):
    """Add a new expense."""
    ...

See full code in expenses_mcp.py.

When FastMCP generates the schema based on that function signature, it produces this JSON schema:

{
  "name": "add_expense",
  "description": "Add a new expense.",
  "inputSchema": {
    "properties": {
      "expense_date": {"type": "string"},
      "amount": {"type": "number"},
      "category": {"type": "string"},
      "description": {"type": "string"}
    },
    "required": ["expense_date", "amount",
      "category", "description"],
    "type": "object"
  }
}

That schema is what agents see - nothing else! The name is the function name, the description is the function docstring, and the inputSchema describes each parameter based on its type annotation, and marks all of them as required, since none of them are marked as optional.

We've done only the bare minimum for that tool schema, assigning types for each parameter. But most of those types are bare strings, so the LLM can decide what to pass into each string. As we know, LLMs can be very creative, and can vary wildly in their choices. For example, this is a word cloud of the category values across 83 tool calls:

Word cloud of category strings

Now let's explore different ways to enhance the generated schemas, and evaluate whether those better schemas improve agent success.

Annotating parameters with descriptions

The first step that I always recommend to developers is to annotate each parameter with a description. Any LLM that is using the tool will see the description, and will alter its behavior based on the guidance inside. (We are basically doing prompt engineering inside our function signatures!) To add a description with FastMCP, wrap the type annotation in typing.Annotated and pass in a pydantic.Field with a description. This tool definition adds a description to just the category field:

from pydantic import Field
from typing import Annotated

@mcp.tool
async def add_expense_cat_b(
    expense_date: date,
    amount: float,
    category: Annotated[
        str,
        Field(
            description="Must be one of: Food & drink, Transit and Fuel, Media & streaming, Apparel and Beauty, "
            "Electronics & tech, Home and office, ..."
        ),
    ],
    description: str,
):

With that change, the generated JSON schema now includes the description:

"category": {
    "type": "string",
    "description": "Must be one of:
      Food & drink,
      Transit and Fuel,
      Media & streaming,
      Apparel and Beauty,
      Electronics & tech,
      Home and office, ..."
}

The description can be quite long - and in fact, my actual description became a lot longer to guide the LLM when faced with ambiguous cases:


Choose the closest category for the expense.
Do not ask follow-up questions just to disambiguate the category;
pick the best fit using the description and common sense.
If truly unclear, use Misc.

Heuristics: Food & drink = meals, groceries, coffee, restaurants, snacks;
Transit and Fuel = rideshare, taxi, gas, parking, public transit, tolls;
Media & streaming = movies, concerts, subscriptions, streaming, games, tickets;
Apparel and Beauty = clothing, shoes, cosmetics, haircuts, personal care;
Electronics & tech = devices, gadgets, accessories, apps, software;
Home and office = furniture, supplies, housewares, decor, cleaning;
Health & Fitness = gym, medical, wellness, supplements, pharmacy;
Arts and hobbies = crafts, sports equipment, creative supplies, lessons;
Fees & services = banking, professional services, insurance, subscriptions;
Misc = anything that does not fit well into other categories.

However, the longer the description, the higher the token cost, so you don't get a long description for free!
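You don't need a running MCP server to see what a description annotation produces; pydantic, which FastMCP builds on, can generate the JSON schema for an annotated type directly. A quick sketch (the example description is abbreviated):

```python
from typing import Annotated

from pydantic import Field, TypeAdapter

# The same Annotated[str, Field(...)] shape used for the category parameter
CategoryParam = Annotated[
    str,
    Field(description="Must be one of: Food & drink, Transit and Fuel, ..."),
]

schema = TypeAdapter(CategoryParam).json_schema()
print(schema)  # a string schema with the description attached
```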

Constraining parameters with types

See how we're asking the LLM to constrain itself to a single option in a pre-determined list of options? In this case, we can enforce that in the schema, using enum types. With FastMCP, we can specify that in two different ways. The first option is to type the parameter as a Literal:

from typing import Literal

@mcp.tool
async def add_expense_cat_c(
    expense_date: date,
    amount: float,
    category: Literal[
      "Food & drink", "Transit and Fuel", "Media & streaming", ...
    ],
    description: str,
):

The second option is to type the parameter as an Enum:

from enum import Enum

class Category(Enum):
    FOOD_AND_DRINK = "Food & drink"
    TRANSIT_AND_FUEL = "Transit and Fuel"
    ...

@mcp.tool
async def add_expense_cat_d(
    expense_date: date,
    amount: float,
    category: Category,
    description: str,
):

Both of those options result in the same generated JSON schema for the category parameter:

"category": {
    "type": "string",
    "enum": [
        "Food & drink",
        "Transit and Fuel",
        "Media & streaming", ...
    ]
}

Fun fact: For the Enum case, FastMCP used to generate a different JSON schema that used "references", but multiple models errored when they saw that schema. FastMCP decided to simplify both cases to always output the flat enum array to reduce model errors.
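You can see that reference-style schema for yourself by asking pydantic for a model's JSON schema directly (a sketch, independent of FastMCP): the enum definition lands in $defs, and the property only holds a reference to it.

```python
from enum import Enum

from pydantic import BaseModel

class Category(Enum):
    FOOD_AND_DRINK = "Food & drink"
    TRANSIT_AND_FUEL = "Transit and Fuel"

class AddExpenseArgs(BaseModel):
    category: Category

schema = AddExpenseArgs.model_json_schema()
# The enum values are tucked into $defs; the property is a $ref to
# that definition -- the indirection that tripped up some models.
print("$defs" in schema, schema["properties"]["category"])
```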

We can combine these approaches, wrapping an Enum with a description, like so:

category: Annotated[
    Category,
    Field(
        description=(
            "Choose the closest category. "
            "If truly unclear, use Misc.\n\n"
            "Heuristics: "
            "Food & drink=meals, coffee; "
            "Transit and Fuel=rideshare, "
            "gas, parking; ..."
        )
    ),
],

See full code in expenses_mcp.py.

Then the generated schema includes both the possible values and the description with guidance on selecting them:

"category": {
    "type": "string",
    "enum": [
        "Food & drink",
        "Transit and Fuel",
        "Media & streaming", ...
    ],
    "description": "Choose the closest
      category. If truly unclear, use
      Misc. Heuristics: Food & drink=
      meals, coffee; Transit and Fuel=
      rideshare, gas, parking; ..."
}

Any constraint should beat a bare string for something as free-form as category — but which of these schemas has the greatest impact on getting the agent to pass in the right one? To find out, I set up a series of evaluations.

Setting up evaluations

In my expenses MCP server, I defined multiple tools, each with a different version of the schema:

def add_expense_cat_b(category: Annotated[str, Field(description="...")], ...): ...

def add_expense_cat_c(category: Literal["Food & drink", ...], ...): ...

def add_expense_cat_d(category: Category, ...): ...

def add_expense_cat_e(category: Annotated[Category, Field(description="...")], ...): ...

Next, I created an agent using Pydantic AI and pointed it to my local expenses MCP server. Here's simplified code:

server = MCPServerStreamableHTTP(url="http://localhost:8000/mcp")

model = OpenAIResponsesModel(
    "gpt-4.1-mini",
    provider=OpenAIProvider(openai_client=azure_openai_client))

agent = Agent(
    model,
    system_prompt=(
        "You help users log expenses. "
        f"Today's date is {datetime.now().strftime('%B %-d, %Y')}."
    ),
    output_type=str,
    toolsets=[server],
)

result = await agent.run("I bought a sandwich for $12.50.")

See full code in pydanticai_agent.py.

Now, I needed a way to vary which tool schema the agent saw. Fortunately, Pydantic AI makes it easy to filter tools on MCP servers, using code like this:

toolset = server.filtered(
    lambda ctx, tool: tool.name == "add_expense_cat_b")
agent = Agent(model, toolsets=[toolset], ...)
result = await agent.run(case.prompt)

See full code in runner.py.

I came up with 17 different example user inputs for the agent:

  • Yesterday I bought a sandwich for $12.50.
  • I bought a sandwich the Monday before this one for $12.50.
  • Two Mondays ago I spent $8.75 on coffee.
  • I bought a sandwich on the first Monday of this month for $12.50.
  • On the last day of last month I spent $25.99 on a movie ticket.
  • I paid $60 for gas on the last business day of last month.
  • The day before yesterday I spent $4.50 on coffee.
  • Three days ago I took an Uber to the airport for $38.
  • Last Friday I spent $18 on a movie ticket.
  • The day after tomorrow I will buy a bus pass for $20.
  • Yesterday I paid $65 for Instacart grocery delivery.
  • On the last day of last month I bought headphones for $79.99.
  • Yesterday I bought a car for 35000 USD.
  • I paid $0.99 for an app yesterday.
  • Yesterday I spent $200 on a spa treatment.
  • I spent €50 on dinner yesterday.
  • Ayer compré una laptop por 1200 dólares.

For each input, I stored the desired values for each of the tool parameters, like so:

EXPENSE_CASES = [
    ExpenseCase(
        name="clear_food_yesterday",
        prompt="Yesterday I bought a sandwich for $12.50.",
        expected_category="Food & drink",
        expected_date=get_yesterday(),
        expected_amount=12.50,
    ),
    ...  # 17 cases ➡️
]

See full code in dataset.py.
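The get_yesterday() helper just needs to produce the same YYYY-MM-DD string the tool expects. A minimal sketch (the repo's version may differ):

```python
from datetime import date, timedelta

def get_yesterday() -> str:
    """Yesterday's date in the YYYY-MM-DD format the tool expects."""
    return (date.today() - timedelta(days=1)).isoformat()
```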

I ran the agent on each input for each tool schema variant: 17 cases x 4 variants, 68 runs in total:

for variant in ["cat_b", "cat_c", "cat_d", "cat_e"]:
    toolset = server.filtered(
        lambda ctx, tool: tool.name == f"add_expense_{variant}")
    agent = Agent(model, toolsets=[toolset], ...)
    for case in EXPENSE_CASES:
        result = await agent.run(case.prompt)
        evals = run_all_evaluations(
            result.tool_calls, case)

See full code in runner.py.

Each time the agent ran, I inspected the tool calls to verify whether it had issued a tool call at all, and whether the tool call arguments matched my desired arguments. I recorded the results in both a JSON file and a more human-readable Markdown file.
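The core of that check is a straightforward comparison between the recorded tool-call arguments and the case's expected values. A minimal sketch (the repo's run_all_evaluations is more thorough):

```python
def evaluate_tool_call(tool_calls: list[dict], expected_args: dict) -> dict[str, bool]:
    """Check that a tool call was issued and that its arguments match."""
    if not tool_calls:
        return {"tool_called": False, "args_matched": False}
    args = tool_calls[0]["args"]
    matched = all(args.get(key) == value for key, value in expected_args.items())
    return {"tool_called": True, "args_matched": matched}

result = evaluate_tool_call(
    [{"name": "add_expense_cat_e",
      "args": {"category": "Food & drink", "amount": 12.5}}],
    {"category": "Food & drink", "amount": 12.5},
)
print(result)  # {'tool_called': True, 'args_matched': True}
```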

Evaluation results: category

For the four category variants, these are the results across the 17 cases:

                                            Annotated[str]  Literal  Enum   Annotated[Enum]
Was tool called?                            15/17           16/17    16/17  17/17
When called, did category match expected?   14/15           13/16    13/16  15/17
Schema size (avg tokens)                    374             412      424    836

There's no clear winner amongst the first three schemas. For the first schema, where we just provided a description, the agent was more likely to decide not to call the tool at all, and instead respond with a clarifying question, like "could you please provide a category?". That may be desirable for some scenarios, to encourage agents to ask users in the face of ambiguity, but if we believe that we've provided enough information in the schema for the agent to make a clear choice, then our schema has failed. For the middle two schemas, where we provided just the enum options with no description, the agent was more likely to call the tool, but it selected the wrong category more often. That makes sense, since the schema lacked the description with the additional guidance.

The final schema is the clear winner, as the agent called the tool all the time, and matched the desired category the most often. There is a drawback of course, and that's why I included the schema size in the table: the combination of description and enum list increased the size of the schema to be double any of the other variants. That extra cost is likely worth it, but we always need to consider any improvements that increase quality at the expense of tokens.

You might be thinking, "hey, clearly stricter schemas are always better!" Alas, the story gets murkier.

Evaluation results: date

Remember that our add_expense tool also has the expense_date parameter, specified as a string in our basic schema. I wanted to make sure that those dates always came in a format that I could easily store in my database as YYYY-MM-DD, so I came up with three stricter schemas.

I started off by adding a description specifying the format:

expense_date: Annotated[
    str, "Date in YYYY-MM-DD format"
]

As a reminder, that generates this JSON schema:

"expense_date": {
    "description": "Date in YYYY-MM-DD format",
    "type": "string"
}

Then I discovered that FastMCP supports date as a type for tool parameters, so I added that variant:

expense_date: date

That results in this JSON schema:

"expense_date": {
    "format": "date",
    "type": "string"
}

I thought it'd be fun to also try using a regular expression, since FastMCP supports that too:

expense_date: Annotated[
    str,
    Field(pattern=r"^\d{4}-\d{2}-\d{2}$"),
],

See full code in expenses_mcp.py.

That schema adds a "pattern" field to the generated JSON schema:

"expense_date": {
    "pattern": "^\\d{4}-\\d{2}-\\d{2}$",
    "type": "string"
}
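Since FastMCP builds on pydantic, a pattern constraint is also enforced at validation time, so a malformed date is rejected before the tool body ever runs. The same check can be sketched with pydantic directly:

```python
from typing import Annotated

from pydantic import Field, TypeAdapter, ValidationError

DateParam = Annotated[str, Field(pattern=r"^\d{4}-\d{2}-\d{2}$")]
adapter = TypeAdapter(DateParam)

assert adapter.validate_python("2026-03-16") == "2026-03-16"
try:
    adapter.validate_python("03/16/2026")  # wrong format
except ValidationError:
    print("rejected")  # malformed dates never reach the tool body
```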

Here are the evaluation results running the Pydantic AI agent with gpt-4.1-mini across the 17 cases and 4 schema variants, including the bare string:

                            str    Annotated[str]  date   Field(pattern)
Was tool called?            17/17  17/17           17/17  17/17
Date match (of called)      12/17  12/17           12/17  12/17
Schema size (avg tokens)    326    406             414    423

Do you see what I see? Every single variant had the same success rates! The agent called the tool 100% of the time, and it matched the expected date the same fraction of the time. I expected lower success for the bare-string schema, but even without any description at all, the agent always used YYYY-MM-DD format for the date. It appears that because the field name ends in "_date" and YYYY-MM-DD is the standard ISO format for dates, that's the format the model defaults to. I suspect that an SLM, or the oldest tool-calling model available, might have done worse. Frontier models, however, need no additional prompting to produce dates in standard ISO format.

Of course, you likely still want to use one of these schemas to guide the agents, to be on the safe side, and they fortunately do not increase the token size significantly. Personally, I like the date option, since that plays nicely with the rest of the Python server code.

You might be wondering about all the cases where the agent failed to suggest the right date. All of those failures were due to date math. For example, when the user says "Two Mondays ago I spent $8.75 on coffee.", the agent calculated the date as one Monday ago instead of two Mondays ago. If users were truly entering their data like this, then it might be a good idea to equip the server with some date calculation tools, or give the agent some guidance on when it should ask users to clarify the date.
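Such a date-calculation tool could be a plain deterministic function that the agent calls instead of doing the arithmetic itself. A hypothetical sketch (mondays_ago is not part of the article's server):

```python
from datetime import date, timedelta

def mondays_ago(n: int, today: date) -> date:
    """Date of the Monday n weeks back; "one Monday ago" is the most
    recent Monday strictly before today."""
    days_back = today.weekday() or 7  # Monday is weekday 0
    last_monday = today - timedelta(days=days_back)
    return last_monday - timedelta(weeks=n - 1)

# From Monday, March 16, 2026: "two Mondays ago" is March 2.
print(mondays_ago(2, date(2026, 3, 16)))  # 2026-03-02
```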

Cross-model evaluations

After seeing the results for an agent powered by gpt-4.1-mini, I was super curious to see what would happen with both an older model and a newer model, so I deployed a gpt-4o and a gpt-5.3-codex and ran them through the same evaluations.

For the category schema variants, the results are very interesting:

Did agent call the tool?

Schema           gpt-4o  gpt-4.1-mini  gpt-5.3-codex (med)
Annotated[str]   17/17   15/17         17/17
Literal          17/17   16/17         17/17
Enum             17/17   16/17         17/17
Annotated[Enum]  17/17   17/17         17/17

When called, did category match expected?

Schema           gpt-4o  gpt-4.1-mini  gpt-5.3-codex (med)
Annotated[str]   17/17   14/15         15/17
Literal          15/17   13/16         13/17
Enum             14/17   13/16         13/17
Annotated[Enum]  17/17   15/17         15/17

As you can see, the gpt-4o model appears to be the winner: it always calls the tool, and it matches the category correctly 100% of the time, as long as it is provided a description. The gpt-5.3-codex model also always calls the tool, but it often chooses a different category than our desired category. So, at least for this particular scenario, the gpt-4o model aligns closer to our human decision-making process than the gpt-5.3-codex model.

But what if the newer model is just smarter than we are? Consider this example input and category choices:

"Yesterday I spent $200 on a spa treatment." with Annotated[Enum]:

gpt-4o: 🤖 Health & Fitness
gpt-4.1-mini: 🤖 Apparel and Beauty
gpt-5.3-codex: 🤖 Apparel and Beauty

We marked "spa treatment" as "Health & Fitness" in our data, but newer models both preferred "Apparel and Beauty". Both of them seem like reasonable options, so the model disagreement is pointing out the ambiguity in the categories of our ground truth data. If we really wanted "spa treatment" to be "Health & Fitness", then we may need to give that example in our category description. Or, we might decide to change our ground truth data entirely to agree with the newer model's category selection. That's one thing that I love about running evaluations: they force you to think more deeply about your expectations of LLMs in the face of diverse user inputs.

For the date schema variants, we see a very different story:

Did agent call the tool?

Schema           gpt-4o  gpt-4.1-mini  gpt-5.3-codex (med)
str              17/17   17/17         17/17
Annotated[str]   17/17   17/17         17/17
date             17/17   17/17         17/17
Field(pattern)   17/17   17/17         17/17

When called, did date match expected?

Schema           gpt-4o  gpt-4.1-mini  gpt-5.3-codex (med)
str              15/17   12/17         17/17
Annotated[str]   15/17   12/17         17/17
date             15/17   12/17         17/17
Field(pattern)   15/17   12/17         17/17

The gpt-5.3-codex model is the clear winner here, as it calls the tool and selects the right date 100% of the time. Remember how gpt-4.1-mini couldn't do the date math? Apparently this newer model can! It correctly calculated "two Mondays ago", and when told "the last day of last month", it even realized that there is no February 29th in 2026. The gpt-4.1-mini model has the worst results here, and that may be due to it being a "-mini" model.

But now we're scratching our heads, because gpt-5.3-codex was not the winner across the board, despite being the much newer model. Let's dig deeper.

Impact of reasoning effort

The gpt-5.3-codex model is a reasoning model - and reasoning models can produce very different outputs depending on their reasoning effort level. I ran that evaluation on "medium", the default level, but for this model the effort can be "low", "medium", "high", or "xhigh". Here are the results:

                                    low    medium  high   xhigh
Did category match ground truth?    100%   88.2%   88.2%  88.2%
Did date match ground truth?        100%   100%    100%   100%
Schema size (average tokens)        862    890     939    1114
Latency (average ms)                7,129  7,474   8,828  11,554

For the category schema variants, we see that the category match percentage drops from 100% to 88% as soon as we go from "low" to "medium", and stays there. Meanwhile, higher reasoning effort steadily increases cost and latency: by the time we reach "xhigh", the schema size has grown by roughly 250 tokens and the latency has increased by more than 60%.

Since this is a reasoning model, we can actually dig into the reasoning traces, to give us more insight into how we might steer the model more towards our desired category.

Here's how it reasons on "low" mode:

I’m considering categories like Health & Fitness or Apparel/Beauty since it’s for a spa treatment. There’s ambiguity because spa treatments could fit into personal care or wellness, but I might lean towards Apparel and Beauty. However, Health & Fitness could work too, especially under wellness. I’ll go with one of those!

As you can see, it's not even sure which category to select at this level of reasoning - we only see its final selection in the tool call itself. Compare that to "medium" mode:

The category for this expense is a spa treatment. I'm considering whether it should go under Apparel and Beauty or Health & Fitness. Using some heuristics, personal care fits in Apparel and Beauty, while wellness aligns with Health & Fitness. Since a spa treatment feels more like a beauty or personal care choice, I think I'll choose Apparel and Beauty.

This time, it explicitly decided on the category in its reasoning, and called the tool with the selection. Once again, you might agree with the model's choice here, and change the ground truth itself.

When we are developing MCP servers, we're not necessarily in control of the models powering the agents that call those MCP servers. Ideally, we're designing tool schemas that are clear and constrained enough so that all the most popular models at all reasoning effort levels will call our tools the way we expect.

Comparing agent frameworks

We live in a world with hundreds of agent frameworks and coding agent tools. All of them share a common approach: calling tools in a loop until the user's goal is reached. Behind the scenes, though, implementations vary: some agents attach their own system prompts to your prompt, some add memory and caching, and some have special built-in reflection and retry loops. So when it comes to calling MCP servers, how much variance might we expect to see?

For my final evaluation, I wrote an agent using the GitHub Copilot SDK, and gave it the same system prompt and MCP server connection as the Pydantic AI agent. The simplified code:

client = CopilotClient()

session = await client.create_session(SessionConfig(
    model="gpt-5.3-codex",
    mcp_servers={
        "expenses": MCPRemoteServerConfig(
            type="http",
            url="http://localhost:8000/mcp",
            tools=["add_expense_cat_e"],
        )
    },
    system_message={
        "mode": "replace",
        "content": "You help users log expenses. "
            f"Today's date is {datetime.now().strftime('%B %-d, %Y')}.",
    },
))

await session.send_and_wait({"prompt": "I bought a sandwich for $12.50."})

See full code in copilotsdk_agent.py.

For the evaluation, I used the gpt-5.3-codex model on medium effort across the 4 category schemas, 4 date schemas, and 17 cases. The results:

Was tool called at all?

Schema           Pydantic AI  Copilot SDK
Annotated[str]   17/17        17/17
Literal          17/17        17/17
Enum             17/17        17/17
Annotated[Enum]  17/17        17/17

Did category match expected?

Schema           Pydantic AI  Copilot SDK
Annotated[str]   15/17        15/17
Literal          13/17        13/17
Enum             13/17        13/17
Annotated[Enum]  15/17        15/17

Did date match expected?

Schema          Pydantic AI  Copilot SDK
str             17/17        17/17
Annotated[str]  17/17        17/17
date            17/17        17/17
Field(pattern)  17/17        17/17

The success rates are exactly the same across both agents! Now, I will confess that in my first attempt at evaluation, the Copilot SDK agent had an off-by-one error for each date it selected, and I suspect there's a UTC date somewhere in the default system prompt. When I re-ran the evaluation at a date where UTC and my timezone (PT) were the same, the dates were all correct. You learn all sorts of things when running evaluations.
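That off-by-one is easy to reproduce: a UTC clock rolls over to the next day hours before a Pacific Time clock does. A sketch of the mismatch:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 7pm on March 15 in Pacific Time is already 2am on March 16 in UTC,
# so an agent that stamps "today" in UTC logs evening expenses with
# tomorrow's date for a user in PT.
local = datetime(2026, 3, 15, 19, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
print(local.date(), local.astimezone(timezone.utc).date())
# 2026-03-15 2026-03-16
```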

Takeaways

I went into this investigation certain that I would see significant improvement from agents when I used stricter types and constraints for the parameter types. I realize now that the models have improved so much and been so robustly trained for tool calling, that they often do not need the specificity of the stricter types. They mostly need clarity whenever there is ambiguity, and that can come in the form of a string description.

However, there are still other benefits to using stricter schemas, like increased type safety and validation in our MCP server codebase. Personally, I would rather use date for the date input and Enum for the category input, as those lead to cleaner code inside the tool implementations.

LLMs, and the agents powered by them, are both non-deterministic and not that predictable. The only way to really see how an agent will respond to your MCP server tool schemas is to set up evaluations for the scenarios that you care about. If you're new to evaluations, check out the fantastic resources from ML engineer Hamel Husain.

I learned a lot during this investigation, and hope my approach is useful to you as well. All of the code — the MCP server, schema variants, agents, and evaluation framework — is available in my GitHub repository, so feel free to explore, adapt, and run your own experiments. Please share any of your own experience with MCP tool schemas and evaluations with me. Thank you!

Thursday, March 12, 2026

Can MCP choose my outfit?

When I was a kid, one of my first Java applets was a UI for choosing outfits by mixing and matching different articles of clothing. Now, with the advent of agents and MCP, I realized that I could make a modern, more dynamic version: an MCP server that can find relevant clothing based on a user query and render matching items as a slideshow. Let's walk through the experience and the code powering it.


Searching for relevant clothing

After connecting VS Code to my closet MCP server, I ask a query like:

i am presenting at PyAI about MCP, do I have MCP themed clothing? show me the best option.

GitHub Copilot decides that it can use the closet MCP server to answer that question, and it calls the image_search tool with these arguments:

{
  "query": "MCP Model Context Protocol themed clothing",
  "max_results": 5
}

The tool call returns a mix of binary files (thumbnails for each matching article of clothing) and structured data (a JSON object containing the filename, display name, and description for each article):

{
  "results": [
    {
      "filename": "IMG_3234.jpg",
      "display_name": "IMG_3234.jpg",
      "description": "The image shows a black sleeveless dress hanging on a white hanger against a plain wall. The dress has a printed text on the front that reads: \"YOU DOWN WITH MCP? Yeah, you know me!\" The first line is in large white uppercase letters, and the second line is in smaller pink cursive letters. The dress has a fitted top and a flared skirt."
    },...

Here's what that looks like in the GitHub Copilot chat interface. Notice that Copilot attaches the images, so I can actually click on them to see each result directly in VS Code, as if they were a file in the workspace.

Now let's look at the code powering that tool call. I built the server using FastMCP, so I declare my tools by wrapping functions in the mcp.tool() decorator and annotating the arguments with types and helpful descriptions. Inside the function, I use Azure AI Search with hybrid retrieval (both the text query and the query's vector) against a target index that has multimodal embeddings plus LLM-generated descriptions for the images. The tool returns a result that contains both the binary files and the structured content.

@mcp.tool()
async def image_search(
  query: Annotated[
    str, "Text description of images to find (e.g., 'red dress')"
  ],
  max_results: Annotated[int, "Max number of images to return (1-20)"] = 5,
) -> ToolResult:
  """
  Search for images matching a natural language query.
  Returns the image data and descriptions.
  """
  results = await search_client.search(
    search_text=query,
    top=max_results,
    vector_queries=[VectorizableTextQuery(
        k_nearest_neighbors=max_results, fields="embedding", text=query)],
    select="metadata_storage_path,verbalized_image")

  blob_service_client = get_blob_service_client()

  files: list[File] = []
  image_results: list[dict[str, str]] = []
  result_index = 0
  async for result in results:
    result_index += 1
    url = result["metadata_storage_path"]
    description = result.get("verbalized_image")
    container_name, blob_name = get_blob_reference_from_url(url)
    blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_name)
    stream = await blob_client.download_blob()
    image_bytes = await stream.readall()
    image_format = get_image_format(url)
    display_name = os.path.basename(blob_name)
    file_basename = Path(display_name).stem
    thumbnail_bytes = resize_image_bytes(image_bytes, image_format)
    files.append(File(data=thumbnail_bytes, format=image_format, name=file_basename))
    image_results.append({
      "filename": blob_name,
      "display_name": display_name,
      "description": description})

  return ToolResult(
    content=files,
    structured_content={
      "query": query,
      "results": image_results})
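The get_blob_reference_from_url helper just needs to split the container name from the blob path in a standard Azure Blob Storage URL. A sketch (the repo's version may handle more edge cases):

```python
from urllib.parse import unquote, urlparse

def get_blob_reference_from_url(url: str) -> tuple[str, str]:
    """Split an Azure Blob Storage URL into (container name, blob name)."""
    path = unquote(urlparse(url).path).lstrip("/")
    container, _, blob_name = path.partition("/")
    return container, blob_name

print(get_blob_reference_from_url(
    "https://myaccount.blob.core.windows.net/closet/tops/IMG_3234.jpg"))
# ('closet', 'tops/IMG_3234.jpg')
```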

Displaying selected clothing

Once the agent finds possible matching clothing, it reasons over the results and selects the best of them. If the agent is using a multimodal LLM, like most modern frontier models, it can reason over both the image content and the image descriptions. It can then render its top choices directly in the UI, using an MCP app that renders a JavaScript-powered slideshow of images.

Here's what that looks like in GitHub Copilot chat:

Let's check out the code that powers that MCP app. An app is actually a kind of tool, so we once again wrap a Python function in @mcp.tool. This time, however, we specify that it's an app by passing an AppConfig with an associated resource for the image viewer HTML. Inside the function, we fetch the images from Azure Blob Storage by filename, then return both the binary data for the images and structured content that includes the filename and MIME type of each image.

@mcp.tool(
  app=AppConfig(resource_uri=IMAGE_VIEW_URI)
)
async def display_image_files(
  filenames: Annotated[list[str], "List of image filenames to retrieve"]
) -> ToolResult:
  """Fetch images by filename and render them in a carousel display."""
  blob_service_client = get_blob_service_client()

  image_blocks: list[types.ImageContent] = []
  image_results: list[dict[str, str]] = []
  for filename in filenames:
    blob_client = blob_service_client.get_blob_client(container=IMAGE_CONTAINER_NAME, blob=filename)
    stream = await blob_client.download_blob()
    image_bytes = await stream.readall()
    mime_type = get_image_mime_type(filename)
    image_blocks.append(
      types.ImageContent(
        type="image",
        data=base64.b64encode(image_bytes).decode("utf-8"),
        mimeType=mime_type))
    image_results.append({
      "filename": filename,
      "mimeType": mime_type})

  return ToolResult(
    content=image_blocks,
    structured_content={
      "images": image_results,
  })
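The get_image_mime_type helper can lean on the standard library; a sketch (hypothetical, with a fallback for unrecognized extensions):

```python
import mimetypes

def get_image_mime_type(filename: str) -> str:
    """Guess an image MIME type from the file extension."""
    mime, _ = mimetypes.guess_type(filename)
    return mime or "image/jpeg"  # assume JPEG when the extension is unknown

print(get_image_mime_type("IMG_3234.jpg"))  # image/jpeg
```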

Next we need to define the resource that serves up the image viewer HTML page. We wrap a Python function in @mcp.resource, assign it a "ui://" URI that is unique within our MCP server, and declare which external domains are allowed by its Content Security Policy (CSP):

@mcp.resource(
    IMAGE_VIEW_URI,
    app=AppConfig(csp=ResourceCSP(resource_domains=["https://unpkg.com"])),
)
def image_view() -> str:
    """Render images returned by display_image_files as an MCP App."""
    return load_image_viewer_html()

Finally, we need the actual HTML that will render inside the iframed app. This tiny webpage brings in ext-apps, a JavaScript package that manages bidirectional communication with the MCP client. In our JavaScript, we declare an App instance, define the ontoolresult callback, and connect the app. That callback receives the images from the tool result and renders them inside the HTML. Apps can also communicate back to the client, but that wasn't necessary for this UI.

<!DOCTYPE html>
<html>
<body>
  <div id="carousel">
    <button id="prev" type="button" aria-label="Previous">&#8249;</button>
    <div id="frame"></div>
    <button id="next" type="button" aria-label="Next">&#8250;</button>
    <span id="counter" aria-live="polite"></span>
  </div>
  <script type="module">
    import { App } from "https://unpkg.com/@modelcontextprotocol/ext-apps@0.4.0/app-with-deps";

    const app = new App({ name: "Image Viewer", version: "1.0.0" });

    let images = [];
    let index = 0;

    const frame = document.getElementById("frame");
    const prevBtn = document.getElementById("prev");
    const nextBtn = document.getElementById("next");
    const counter = document.getElementById("counter");

    function show(i) {
      index = i;
      const img = images[index];
      frame.innerHTML = "";
      const el = document.createElement("img");
      el.src = `data:${img.mimeType || "image/jpeg"};base64,${img.data}`;
      el.alt = "Blob image";
      frame.appendChild(el);
      prevBtn.disabled = index === 0;
      nextBtn.disabled = index === images.length - 1;
      counter.textContent = images.length > 1 ? `${index + 1} / ${images.length}` : "";
    }

    prevBtn.addEventListener("click", () => { if (index > 0) show(index - 1); });
    nextBtn.addEventListener("click", () => { if (index < images.length - 1) show(index + 1); });

    app.ontoolresult = ({ content }) => {
      images = (content || []).filter((block) => block.type === "image");
      if (images.length > 0) show(0);
    };

    await app.connect();
  </script>
</body>
</html>

Putting together the final outfit

If I want more ideas of how to put together my outfit, I can keep asking questions that will prompt the agent to call the MCP server. For example, my first follow-up question was:

great, i love the pink, matches pydantic-ai colors. can you find some pink accessories to go with it?

Then, after it suggested some nice accessories, I finished with:

sounds good. i also need a jacket to keep me warm. show me my final outfit.

To show me my final outfit, it called the display_image_files tool with only the selected articles of clothing - jacket, dress, and earrings. I can navigate through them with the arrows:

MCP app rendering a jacket inside VS Code

How'd the outfit work out? Pretty great!

Try it yourself!

The full MCP server code is available in the Azure-Samples/image-search-aisearch repository, along with a minimal frontend for image searching and data ingestion via Azure AI Search indexer with Azure OpenAI LLMs (for describing the images) and Azure AI Vision (for multi-modal embeddings of the images). The code can be used for any images, not just pictures of your clothing.

Here are ways you could improve it:

  • Use an image-generation model: visualize the head-to-toe outfit on a mannequin (instead of showing each item separately in the carousel)
  • Optimize token consumption: currently, the server returns a thumbnail for every search result, and images require a lot of tokens to represent, so conversations can easily exceed the context window. You could experiment with smaller images, higher compression, or other approaches.
  • Add user login: my MCP server is a public endpoint, but most people don't want their closet (or private images) to be public knowledge. You can add on key-based auth or OAuth using the FastMCP auth providers, as I described in the MCP auth livestream.

Have fun, and let me know if you build your own version!

Wednesday, March 11, 2026

Learnings from the PyAI conference

I recently spoke at the PyAI conference, put on by the good folks at Prefect and Pydantic, and I learnt so much from the talks I attended. Here are my top takeaways from the sessions that I watched:


AI Evals Pitfalls

Hamel Husain

  • View slides
  • Hamel cautioned against blindly using automated evaluation frameworks and built-in evaluators (like helpfulness and coherence).
  • Instead, we should adopt a data science approach to evaluation: explore the data, discover what's actually breaking, identify the most important metric, and iterate as new data comes in.
  • We shouldn't just trust an LLM-as-a-judge to give accurate scores. Instead, we should validate it like we would validate an ML classifier: with labeled data, train/dev/test splits, and precision/recall metrics. LLM-judges should always give pass/fail results instead of 1-5 scores, so that there's no ambiguity in their judgment.
  • When generating synthetic data, first come up with dimensions (such as persona), generate combinations based off dimensions, and convert those into realistic queries.
  • Hamel created evals-skills, a collection of skills for coding agents that can be run against evaluation pipelines to find issues like poorly designed LLM-judges.
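Hamel's classifier-style validation is easy to make concrete. As an illustrative sketch (the human labels and judge verdicts below are invented), here's how you'd score an LLM-judge's pass/fail verdicts against human labels with precision and recall:

```python
# Illustrative sketch: validate an LLM-judge's pass/fail verdicts against
# human labels, exactly as you would evaluate a binary classifier.
# The data here is made up for demonstration.

human_labels = [True, True, False, True, False, False, True, False]    # ground truth
judge_verdicts = [True, False, False, True, True, False, True, False]  # LLM-judge

tp = sum(1 for h, j in zip(human_labels, judge_verdicts) if h and j)
fp = sum(1 for h, j in zip(human_labels, judge_verdicts) if not h and j)
fn = sum(1 for h, j in zip(human_labels, judge_verdicts) if h and not j)

precision = tp / (tp + fp)  # of the judge's "pass" verdicts, how many were right
recall = tp / (tp + fn)     # of the true passes, how many the judge caught

print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.75 recall=0.75
```

With real data, you'd compute these on a held-out test split, tuning the judge prompt on the train/dev splits only.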

Build Reasonable Software

Jeremiah Lowin (FastMCP/Prefect)

  • Write your Python programs in a way that coding agents can reason about them, so that agents can more easily maintain and build on them. For example, the FastMCP v2 SDK was not well designed (bad abstractions), so a new CodeMod feature required 4,000 lines of code. In the new FastMCP v3 SDK (same functional API, different abstractions backing it), the same feature required only 500 lines of code.
  • To make Python FastMCP servers more Pythonic, Jeremiah is developing a new package for MCP apps which includes the most common UIs (forms/tables/charts), called PreFab: https://github.com/PrefectHQ/prefab

Panel: Open Source in the Age of AI

Guido van Rossum (CPython), Samuel Colvin (Pydantic), Sebastián Ramírez (FastAPI), Jeremiah Lowin (FastMCP)

  • OSS maintainers are overwhelmed by AI Slop PRs. As one maintainer said, "Don't expect someone else to be the first one to read your code". Each maintainer is coming up with different systems/bots/heuristics to detect and triage PRs (like FastMCP auto-rejects PRs that are too long!). Some maintainers are going to turn off PRs entirely, as now permitted by GitHub.
  • Samuel's opinion: GitHub should add a "human identity" vs "user identity", as well as a user reputation system where reputation is based off how many useful contributions you've made (or a "sloppiness" metric).

Do developer tools matter to agents?

Zanie Blue (Astral)

  • Astral is considering ways to make their tools more agent-friendly. For example, their error messages for ty are currently fairly long and include ASCII arrows pointing to the code in question, and they suspect the agents may not need all of that in their context.
  • Astral is also re-prioritizing based off the move towards 100% agentic coding, with less emphasis on tools that would be used solely by a developer who is manually typing. For example, they were once considering adding a "review" feature to review each ruff suggestion one-by-one, but that seems unlikely to be used by developers these days.
  • Astral may now be able to take advantage of agents' ability to reason over whether proposed ruff fixes are safe. Currently, ruff only auto-fixes code when it knows that the code change can't introduce any unwanted changes (like comment deletions), and it marks other fixes as "unsafe". Now ruff could add more unsafe fixes, knowing that an LLM could decide whether each was actually a safe change.

Context Engineering for MCP Servers

Till Döhmen (MotherDuck)

  • Till walked through the multi-step process of developing MCP servers to allow developers to interact with their MotherDuck databases. The server started with a single "query" tool, which later split into multiple tools, including "list_databases" and "list_tables". They had to offer dedicated schema-exploration tools since DuckDB uses a different syntax than PostgreSQL, and the agents kept suggesting PostgreSQL syntax that didn't work.
  • They also added a tool to search the documentation (powered by the same search used by their website) and a tool that teaches the agent how to create "dive"s, a visualization of the database state.
  • One of their big struggles is the lack of MCP spec support across clients: the MCP spec is so rich and full of features, but only a handful of clients support those features. It's hard for them to take advantage of the new features, knowing their users may be using a client that does not support them.

Controlling the wild: from tool calling to computer use

Samuel Colvin (Pydantic)

  • Samuel built Monty to be a minimal implementation of Python for agents to use. It intentionally does not support all of the Python standard library (like sockets or file open), but it does include a way to call back to functions on the host. When using Monty, you do not need to set up a separate sandbox.
  • Monty is not designed to run full applications - it's designed to run Python code generated by agents.
  • The models vary in how successfully they call Monty in a REPL loop: Opus 4.5 works the best, and Opus 4.6 works worse, presumably because the RLHF process taught 4.6 to execute code in a particular way.
  • github.com/pydantic/monty

What's new in FastAPI for AI

Sebastián Ramírez (FastAPI)

  • View slides
  • There's now a VS Code extension for FastAPI, built by my brilliant former colleague, Savannah Ostrowski. It makes it easy to navigate to different routes in your app, and it adds a CodeLens for navigating from pytest tests back to the route that they're testing.
  • FastAPI has built-in support for streaming JSON lines! Just yield an AsyncIterable. I plan to port my FastAPI streaming chat apps to this approach, pronto.
  • In pyproject.toml, you can now specify the FastAPI entrypoint, so that the fastapi command knows exactly where your FastAPI app is.

Context Engineering 2.0: MCP, Agentic RAG & Memory

Simba Khadder (Redis)

  • Redis is adding many features specifically to help developers who are creating apps with generative AI. For example, they've added semantic caching of queries, based off a fine-tuned BERT model, so that developers don't have to pay every time someone says "good morning" to a chatbot. Anyone can use semantic caching in open-source Redis by bringing your own LLMs, but the fine-tuned model is available only on Redis Cloud.

Friday, January 16, 2026

Using on-behalf-of flow for Entra-based MCP servers

In December, we presented a series about MCP, culminating in a session about adding authentication to MCP servers. I demoed a Python MCP server that uses Microsoft Entra for authentication, requiring users to first log in to the Microsoft tenant before they could use a tool. Many developers asked how they could take the Entra integration further, like to check the user's group membership or query their OneDrive. That requires using an "on-behalf-of" flow, also known as "delegation" in OAuth, where the MCP server uses the user's identity to call another API, like the Microsoft Graph API. In this blog post, I will explain how to use Entra with an OBO flow in a Python FastMCP server.

How MCP servers can use Entra authentication

The MCP authorization specification is based on OAuth2, but with some additional features tacked on top. Every MCP client is actually an OAuth2 client, and each MCP server is an OAuth2 resource server.

Diagram of OAuth 2.1 entities with MCP client and server

MCP auth adds these features to help clients determine how to authorize a server:

  • Protected resource metadata (PRM): Implemented on the MCP server, provides details about the authorization server and method
  • Authorization server metadata: Implemented on the authorization server, gives URLs for OAuth2 endpoints
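As an illustration, a protected resource metadata document is just a small JSON file that the MCP server serves at /.well-known/oauth-protected-resource. This hand-written example (the URLs are made up) shows the kind of fields defined by RFC 9728:

```python
import json

# Illustrative PRM document with made-up URLs. An MCP client fetches this
# to discover which authorization server protects the MCP server.
prm = {
    "resource": "https://mcp.example.com",                    # the protected MCP server
    "authorization_servers": ["https://login.example.com"],   # where to get tokens
    "scopes_supported": ["mcp-access"],
    "bearer_methods_supported": ["header"],  # tokens go in the Authorization header
}
print(json.dumps(prm, indent=2))
```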

Additionally, to allow MCP servers to work with arbitrary MCP clients, MCP auth supports either of these client registration methods:

  • Dynamic Client Registration (DCR): Implemented on the authorization server, it can register new MCP clients as OAuth2 clients, even if it hasn't seen them before.
  • Client ID Metadata Documents (CIMD): An alternative to DCR, this requires both the MCP client to make a CIMD document available on a server, and requires the authorization server to fetch the CIMD document for details about the client.

Microsoft Entra does support authorization server metadata, but it does not support either DCR or CIMD. That's actually fine if you are building an MCP server that's only going to be used with pre-authorized clients, like if the server will only be used with VS Code or with a specific internal MCP client. But, if you are building an MCP server that can be used with arbitrary MCP clients, then either DCR or CIMD is required. So what do we do?

Fortunately, the FastMCP SDK implements DCR on top of Entra using an OAuth proxy pattern. FastMCP acts as the authorization server, intercepting requests and forwarding them to Entra when needed, and storing OAuth client information in a designated store (in-memory or Cosmos DB).

⚠️ Warning: This proxy approach is intended only for development and testing scenarios. For production deployments, Microsoft recommends using pre‑registered client applications where client identifiers and permissions are explicitly created, reviewed, and approved on a per-app basis.

Diagram of OAuth proxy pattern

Let's walk through the steps to set that up.

Registering the server with Entra

Before the server can use Entra to authorize users, we need to register the server with Entra via an app registration. We can do registration using the Azure Portal, Azure CLI, Microsoft Graph SDK, or even Bicep. In this case, I use the Python MS Graph SDK as it allows me to specify everything programmatically.

First, I create the Entra app registration, specifying the sign-in audience (single-tenant), redirect URIs (including local MCP server and VS Code redirect URIs), and the scopes for the exposed API.

request_app = Application(
  display_name="FastMCP Server App",
  sign_in_audience="AzureADMyOrg",  # Single tenant
  web=WebApplication(
    redirect_uris=[
      "http://localhost:8000/auth/callback",
      "https://vscode.dev/redirect",
      "http://127.0.0.1:33418",
      "https://deployedurl.com/auth/callback"
    ],
  ),
  api=ApiApplication(
    oauth2_permission_scopes=[
      PermissionScope(
        id=uuid.uuid4(),
        admin_consent_display_name="Access FastMCP Server",
        admin_consent_description="Allows access to the FastMCP server as the signed-in user.",
        user_consent_display_name="Access FastMCP Server",
        user_consent_description="Allow access to the FastMCP server on your behalf",
        is_enabled=True,
        value="mcp-access",
        type="User",
      )],
    requested_access_token_version=2,  # Required by FastMCP
  )
)
app = await graph_client.applications.post(request_app)

await graph_client.applications.by_application_id(app.id).patch(
  Application(identifier_uris=[f"api://{app.app_id}"]))

Thanks to that configuration, when an MCP client like VS Code requests an OAuth2 token, it will request a token with the scope "api://{app.app_id}/mcp-access", and the FastMCP server will validate that incoming tokens contain that scope.

Next, I create a Service Principal for that Entra app registration, which represents the Entra app in my tenant:

request_principal = ServicePrincipal(app_id=app.app_id, display_name=app.display_name)
await graph_client.service_principals.post(request_principal)

I need a way for the server to prove that it can use that Entra app registration, so I register a secret:

password_credential = await graph_client.applications.by_application_id(app.id).add_password.post(
  AddPasswordPostRequestBody(
    password_credential=PasswordCredential(display_name="FastMCPSecret")))

Ideally, I would like to move away from secrets, as Entra now has support for using federated identity credentials for Entra app registrations instead, but that form of credential isn't supported yet in the FastMCP SDK. If you choose to use a secret, make sure that you store the secret securely.

Granting admin consent

This next step is only necessary when our MCP server wants to use an OBO flow to exchange access tokens for other resource server tokens (Graph API tokens, in this case). For the OBO flow to work, the Entra app registration needs permission to call the Graph API on behalf of users. If we controlled the client, we could force it to request the required scopes as part of the initial login dialog. However, since we are configuring this server to work with arbitrary MCP clients, we don't have that option. Instead, we grant admin consent to the Entra app for the necessary scopes, such that no Graph API consent dialog is needed.

This code grants the admin consent to the associated service principal for the Graph API resource and scopes:

server_principal = await graph_client.service_principals_with_app_id(app.app_id).get()
grant = GrantDefinition(
    principal_id=server_principal.id,
    resource_app_id="00000003-0000-0000-c000-000000000000", # Graph API
    scopes=["User.Read", "email", "offline_access", "openid", "profile"],
    target_label="server application")
resource_principal = await graph_client.service_principals_with_app_id(grant.resource_app_id).get()
desired_scope = grant.scope_string()
await graph_client.oauth2_permission_grants.post(
  OAuth2PermissionGrant(
    client_id=grant.principal_id,
    consent_type="AllPrincipals",
    resource_id=resource_principal.id,
    scope=desired_scope))

If our MCP server needed to use an OBO flow with another resource server, we could request additional grants for those resources and scopes.

Our Entra app registration is now ready for the MCP server, so let's move on to see the server code.

Using FastMCP servers with Entra

In our MCP server code, we configure FastMCP's built-in AzureProvider based on the details from the Entra app registration process:

auth = AzureProvider(
    client_id=os.environ["ENTRA_PROXY_AZURE_CLIENT_ID"],
    client_secret=os.environ["ENTRA_PROXY_AZURE_CLIENT_SECRET"],
    tenant_id=os.environ["AZURE_TENANT_ID"],
    base_url=entra_base_url, # MCP server URL
    required_scopes=["mcp-access"],
    client_storage=oauth_client_store, # in-memory or Cosmos DB
)

To make it easy for our MCP tools to access an identifier for the currently logged in user, we define a middleware that inspects the claims of the current token using FastMCP's get_access_token() and sets the "oid" (Entra object identifier) in the state:

class UserAuthMiddleware(Middleware):
    def _get_user_id(self):
        token = get_access_token()
        if not (token and hasattr(token, "claims")):
            return None
        return token.claims.get("oid")

    async def on_call_tool(self, context: MiddlewareContext, call_next):
        user_id = self._get_user_id()
        if context.fastmcp_context is not None:
            context.fastmcp_context.set_state("user_id", user_id)
        return await call_next(context)

    async def on_read_resource(self, context: MiddlewareContext, call_next):
        user_id = self._get_user_id()
        if context.fastmcp_context is not None:
            context.fastmcp_context.set_state("user_id", user_id)
        return await call_next(context)

When we initialize the FastMCP server, we set the auth provider and include that middleware:

mcp = FastMCP("Expenses Tracker",
  auth=auth,
  middleware=[UserAuthMiddleware()])

Now, every request made to the MCP server will require authentication. The server will return a 401 if a valid token isn't provided, and that 401 will prompt the MCP client to kick off the MCP authorization flow.

Inside each tool, we can grab the user id from the state, and use that to customize the response for the user, like to store or query items in a database.

@mcp.tool
async def add_user_expense(
    expense_date: Annotated[date, "Date of the expense in YYYY-MM-DD format"],
    amount: Annotated[float, "Positive numeric amount of the expense"],
    description: Annotated[str, "Human-readable description of the expense"],
    ctx: Context,
):
  """Add a new expense to Cosmos DB."""
  user_id = ctx.get_state("user_id")
  if not user_id:
    return "Error: Authentication required (no user_id present)"
  expense_item = {
    "id": str(uuid.uuid4()),
    "user_id": user_id,
    "date": expense_date.isoformat(),
    "amount": amount,
    "description": description
  }
  await cosmos_container.create_item(body=expense_item)

Using OBO flow in FastMCP server

Now we have everything we need to use an OBO flow inside the MCP tools, when desired. To make it easier to exchange and validate tokens, we use the Python MSAL SDK, configuring a ConfidentialClientApplication similarly to how we set up the FastMCP auth provider:

confidential_client = ConfidentialClientApplication(
    client_id=os.environ["ENTRA_PROXY_AZURE_CLIENT_ID"],
    client_credential=os.environ["ENTRA_PROXY_AZURE_CLIENT_SECRET"],
    authority=f"https://login.microsoftonline.com/{os.environ['AZURE_TENANT_ID']}",
    token_cache=TokenCache(),
)

Inside the tool that requires OBO, we ask MSAL to exchange the MCP access token for a Graph API access token:

access_token = get_access_token()
obo_result = confidential_client.acquire_token_on_behalf_of(
  user_assertion=access_token.token, scopes=["https://graph.microsoft.com/.default"]
)
if "access_token" not in obo_result:
  raise RuntimeError(f"OBO token exchange failed: {obo_result.get('error_description')}")
graph_token = obo_result["access_token"]

Once we successfully acquire the token, we can use that token with the Graph API, for any operations permitted by the scopes in the admin consent granted earlier. For this example, we call the Graph API to check whether the logged in user is a member of a particular Entra group, and restrict tool usage if not:

async with httpx.AsyncClient() as client:
  url = ("https://graph.microsoft.com/v1.0/me/transitiveMemberOf/microsoft.graph.group"
    f"?$filter=id eq '{group_id}'&$count=true")
  response = await client.get(
    url,
    headers={
      "Authorization": f"Bearer {graph_token}",
      "ConsistencyLevel": "eventual",
  })
  data = response.json()
  membership_count = data.get("@odata.count", 0)

You could imagine many other ways to use an OBO flow, like querying for more details from the Graph API, uploading documents to OneDrive/SharePoint/Notes, sending emails, and more!

All together now

For the full code, check out the open source python-mcp-demos repository, and follow the deployment steps for Entra. The most relevant code files are:

  • auth_init.py: Creates the Entra app registration, service principal, client secret, and grants admin consent for OBO flow.
  • auth_update.py: Updates the app registration's redirect URIs after deployment, adding the deployed server URL.
  • auth_entra_mcp.py: The MCP server itself, configured with FastMCP's AzureProvider and tools that use OBO for group membership checks.

I want to reiterate that the OAuth proxy approach is intended only for development and testing scenarios. For production deployments, Microsoft recommends using pre‑registered client applications where client identifiers and permissions are explicitly created, reviewed, and approved on a per-app basis. I hope that in the future, Entra will formally support MCP authorization via the CIMD protocol, so that we can build MCP servers with Entra auth that work with MCP clients in a fully secure and production-ready way.

As always, please let me know if you have further questions or ideas for other Entra integrations.

Friday, December 19, 2025

Watch the recordings from my Python + MCP series

MCP is one of the fastest growing technologies in the Generative AI space this year, and the first AI-related standard that the industry has really embraced wholeheartedly. I just gave a three-part live stream series all about Python + MCP. I showed how to:

  • Build MCP servers in Python using FastMCP
  • Deploy them into production on Azure (Container Apps and Functions)
  • Add authentication, using either Keycloak or Microsoft Entra as the OAuth provider

All of the materials from our series are available and linked below:

  • Video recordings of each stream
  • Powerpoint slides
  • Open-source code samples complete with Azure infrastructure and 1-command deployment

If you're an instructor, feel free to use the slides and code examples in your own classes. 
Spanish speaker? My colleague delivered a fantastic Spanish version of the series.

Building MCP servers with FastMCP

YouTube video
📺 Watch YouTube recording

In the intro session of our Python + MCP series, we dive into MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally. Then we consume that server from chatbots like GitHub Copilot in VS Code, using its tools, resources, and prompts. Finally, we discover how easy it is to connect AI agent frameworks like LangChain and Microsoft agent-framework to the MCP server.

Deploying MCP servers to the cloud

YouTube video
📺 Watch YouTube recording

In our second session of the Python + MCP series, we deploy MCP servers to the cloud! We walk through the process of containerizing a FastMCP server with Docker and deploying to Azure Container Apps. Then we instrument the MCP server with OpenTelemetry and observe the tool calls using Azure Application Insights and Logfire. Finally, we explore private networking options for MCP servers, using virtual networks that restrict external access to internal MCP tools and agents.

Authentication for MCP servers

YouTube video
📺 Watch YouTube recording

In our third session of the Python + MCP series, we explore the best ways to build authentication layers on top of your MCP servers. We start off simple, with an API key to gate access, and demonstrate a key-restricted FastMCP server deployed to Azure Functions. Then we move on to OAuth-based authentication for MCP servers that provide user-specific data. We dive deep into MCP authentication, which is built on top of OAuth2 but with additional requirements like PRM and DCR/CIMD, which can make it difficult to implement fully. We demonstrate the full MCP auth flow in the open-source identity provider Keycloak, and show how to use an OAuth proxy pattern to implement MCP auth on top of Microsoft Entra.

Friday, October 31, 2025

Watch the recordings from my Python + AI series

My colleague and I just wrapped up a live series on Python + AI, a nine-part journey diving deep into how to use generative AI models from Python. I gave the English streams while my colleague Gwen gave the Spanish streams (and I hung out in her live chat, working on my technical Spanish!).

The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agents frameworks and explored the new Model Context Protocol (MCP).

To apply the concepts, we put together code examples that run for free thanks to GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education. The examples are also compatible with local models (via Ollama), Azure OpenAI, or OpenAI.com models.

Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes.


Python + AI: Large Language Models

YouTube video
📺 Watch recording

In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.


Python + AI: Vector embeddings

YouTube video
📺 Watch recording

In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.
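As a tiny illustration of the distance-metric comparison from this session, here's a sketch in plain Python with hand-made vectors (real embeddings would come from a model like text-embedding-3):

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (norm_a * norm_b)

def euclidean_distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hand-made vectors: b points the same direction as a, but with twice the magnitude.
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]

# Cosine distance ignores magnitude (~0 here); Euclidean distance does not.
print(cosine_distance(a, b))
print(euclidean_distance(a, b))
```

This is why cosine distance is the usual choice for comparing embeddings: it measures direction (semantic similarity), not vector length.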


Python + AI: Retrieval Augmented Generation

YouTube video
📺 Watch recording

In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.
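The simple flow can be sketched in plain Python: retrieve the most relevant document, then stuff it into the prompt. This toy version (invented documents, and keyword-overlap scoring standing in for real search) shows the shape of it; a real app would send the prompt to an LLM:

```python
# Minimal RAG retrieval sketch with a toy keyword-overlap scorer.
# Real RAG systems use vector or hybrid search (e.g. Azure AI Search).
documents = [
    "The parental leave policy grants 16 weeks of paid leave.",
    "Expense reports must be filed within 30 days.",
    "The office is closed on public holidays.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True)
    return scored[:top_k]

query = "How many weeks of parental leave do I get?"
context = "\n".join(retrieve(query, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # the grounded prompt that would be sent to the LLM
```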


Python + AI: Vision models

YouTube video
📺 Watch recording

Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.


Python + AI: Structured outputs

YouTube video
📺 Watch recording

In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows.
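As an illustration of that pattern, here's the kind of Pydantic BaseModel you might define for expense extraction (the model and the stand-in response below are invented; validating a hand-written string takes the place of a real LLM call):

```python
from pydantic import BaseModel

class Expense(BaseModel):
    """Schema for an expense extracted from free-form text."""
    date: str
    amount: float
    category: str

# In structured-outputs mode, the model provider guarantees JSON conforming
# to the schema generated from this class; here we validate a hand-written
# stand-in for a model response.
raw_response = '{"date": "2026-03-16", "amount": 12.50, "category": "food"}'
expense = Expense.model_validate_json(raw_response)
print(expense.amount)  # 12.5
```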


Python + AI: Quality and safety

YouTube video
📺 Watch recording

This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.


Python + AI: Tool calling

YouTube video
📺 Watch recording

In our seventh session, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session.
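For a sense of what a tool call specification looks like on the JSON-schema side, here's an invented example in the OpenAI function-calling style (the tool name and parameters are made up):

```python
import json

# Illustrative tool specification in the OpenAI function-calling style.
tool_spec = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Seattle"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}
print(json.dumps(tool_spec, indent=2))
```

The LLM never sees the tool's implementation, only this schema, which is why clear names, descriptions, and constraints matter so much.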


Python + AI: Agents

YouTube video
📺 Watch recording

In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.


Python + AI: Model Context Protocol

YouTube video
📺 Watch recording

In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer.