
Thursday, July 24, 2025

Automated repo maintenance via GitHub Copilot coding agent

I have a problem: I'm addicted to making new repositories on GitHub. As part of my advocacy role at Microsoft, my goal is to show developers how to combine technology X with technology Y, and a repository is a great way to prove it. But that means I now have hundreds of repositories that I am trying to keep working, and they require constant upgrades:

  • Upgraded Python packages, npm packages, GitHub Actions
  • Improved Python tooling (like moving from pip to uv, or black to ruff)
  • Hosted API changes (versions, URLs, deprecations)
  • Infrastructure upgrades (Bicep/Terraform changes)

All of those changes are necessary to keep the repositories working well, but they're boring and repetitive to make. In theory, GitHub already offers Dependabot to manage package upgrades, but unfortunately Dependabot hasn't worked for my more complex Python setups, so I often have to manually take over the Dependabot PRs. These are the kinds of changes that I want to delegate, so that I can focus on new features and technologies.

Fortunately, GitHub has introduced the GitHub Copilot coding agent, an autonomous agent powered by LLMs and MCP servers that can be assigned issues in your repositories. When you assign an issue to the agent, it will create a PR for the issue, put a plan in that PR, and ask for a review when it's made all the changes necessary. If you have comments, it can continue to iterate, asking for a review each time it thinks it's got it working.

I started off with some manual experimentation to see if GitHub Copilot could handle repo maintenance tasks, like tricky package upgrades. It did well enough that I then coded GitHub Repo Maintainer, a tool that searches for all my repos that require a particular maintenance task and creates issues for @Copilot in those repos with detailed task descriptions.

Here's what an example issue looks like:

Screenshot of issue assigned to Copilot agent

A few minutes after filing the issue, Copilot agent sends a pull request to address the issue:

Screenshot of PR from Copilot agent

To give you a feel for the kinds of issues that I've assigned to Copilot, here are more examples:

  • Update GitHub Actions workflow to use ubuntu-latest: This was an easy task. The only issue was with a more complex workflow where the latest ubuntu had a conflict with an additional service, and it came up with a roundabout way of fixing that.
  • Update Bicep to new syntax: This worked well when I provided the exact new syntax to use. When I only told it that the old Bicep syntax was deprecated, it came up with a more convoluted fix and also tried to fix all the Bicep warnings. It got sidetracked because the agent uses "az bicep build" to check Bicep syntax validity, that tool reports warnings by default, and Copilot generally likes to be a do-gooder and fix warnings too. I now often tell it explicitly to "ignore the warnings, just fix the errors" for Bicep-related tasks.
  • Upgrade a tricky Python package: This was a harder upgrade as it required upgrading another package at the same time, something Dependabot had failed to do. Copilot was able to work it out, but only once I pointed out that the CI failed and reminded it to make sure to pip install the requirements file.
  • Update a deprecated URL: This was easy for it, especially because my tool tells it exactly which files it found the old URLs in.

Generally a good strategy has been for me to verify the right general fix in one repo, and then send that well-crafted issue to the other affected repos.

How to assign issues to GitHub Copilot

The GitHub documentation has a great guide on using the UI, API, or CLI to assign issues to the GitHub Copilot coding agent. When using the API, we have to first check if the Copilot agent is enabled, by doing a query to see if the repository's suggestedActors includes copilot-swe-agent. If so, then we grab the id of the agent and use that id when creating a new issue.

Here's what it looks like in Python to find the ID for the agent:

async def get_repo_and_copilot_ids(self, repo):
  headers = {"Authorization": f"Bearer {self.auth_token}", "Accept": "application/vnd.github+json"}
  query = '''
    query($owner: String!, $name: String!) {
      repository(owner: $owner, name: $name) {
        id
        suggestedActors(capabilities: [CAN_BE_ASSIGNED], first: 100) {
          nodes {
            login
             __typename
             ... on Bot { id }
          }
        }
      }
    }
  '''
  variables = {"owner": repo.owner, "name": repo.name}

  async with httpx.AsyncClient(timeout=self.timeout) as client:
    resp = await client.post(GITHUB_GRAPHQL_URL, headers=headers, json={"query": query, "variables": variables})
    resp.raise_for_status()
    data = resp.json()
    repo_id = data["data"]["repository"]["id"]
    copilot_node = next((n for n in data["data"]["repository"]["suggestedActors"]["nodes"]
        if n["login"] == "copilot-swe-agent"), None)
    if not copilot_node or not copilot_node.get("id"):
      raise RuntimeError("Copilot is not assignable in this repository.")
    return repo_id, copilot_node["id"]

The issue creation function uses that ID for the assignee IDs:

async def create_issue_graphql(self, repo, issue):
  repo_id, copilot_id = await self.get_repo_and_copilot_ids(repo)
  headers = {"Authorization": f"Bearer {self.auth_token}", "Accept": "application/vnd.github+json"}
  mutation = '''
  mutation($input: CreateIssueInput!) {
    createIssue(input: $input) {
      issue {
        id
        number
        title
        url
      }
    }
  }
  '''
  input_obj = {
    "repositoryId": repo_id,
    "title": issue.title,
    "body": issue.body,
    "assigneeIds": [copilot_id],
  }
  async with httpx.AsyncClient(timeout=self.timeout) as client:
    resp = await client.post(GITHUB_GRAPHQL_URL, headers=headers,
        json={"query": mutation, "variables": {"input": input_obj}})
    resp.raise_for_status()
    data = resp.json()
  issue_data = data.get("data", {}).get("createIssue", {}).get("issue")
  return {
    "number": issue_data["number"],
    "html_url": issue_data["url"]
  }

Lessons learned (so far!)

I've discovered that there are several intentional limitations on the behavior of the @Copilot agent:

  • Workflows must be approved before running: Typically, when a human contributor submits a pull request, and they're an existing contributor to the repository, the workflows automatically run on their PRs, and the contributor can see quickly if they need to fix any CI failures. For security reasons, GitHub requires a human to press "Approve and run workflows" on each push to a @Copilot PR. I will often press that, see that the CI failed, and comment @Copilot to address the CI failures. I would love to skip that manual process on my side, but I understand why GitHub is erring on the side of security here. See more details in their Copilot risk mitigation docs.
  • PRs must be marked "ready to review": Once again, typically a human contributor would start a PR in draft and mark it as "ready for review" before requesting a review. The @Copilot agent does not mark it as ready, and instead requires a human reviewer to mark it for them. According to my discussion with the GitHub team in the Copilot agent issue tracker, this is intentional to avoid triggering required reviews. However, I am hoping that GitHub adds a repository setting to allow the agent itself to mark PRs as ready, so that I can skip that trivial manual step.

I've also noticed a few common ways that the @Copilot agent produces unsatisfactory PRs, and I've started crafting better issue descriptions to improve its success rate. My issue descriptions now include the following (a sample issue body follows the list):

  • Validation steps: The agent will try to execute any validation steps, so if there are any that make sense, like running a pip install, a linter, or a script, I include those in the issue description. For example, for Bicep changes, issues include "After making this change, run `az bicep build` on `infra/main.bicep` to ensure the Bicep syntax is valid.".
  • How to make a venv: While testing its changes, the agent kept making Python virtual environments in directories other than ".venv", which is the only directory name that I use, and the one that's consistently in my .gitignore files. I would then see PRs that had 4,000 changed files, due to an accidentally checked in virtual environment folder. Now, in my descriptions, I tell it explicitly to create the venv in ".venv".
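
Putting those elements together, here's a hypothetical issue body (the package name, file paths, and commands are illustrative, not copied from a real issue):

Upgrade the openai package to the latest release pinned in requirements.txt.

Steps to validate the change:
1. Create the virtual environment in ".venv" (that folder is already in .gitignore).
2. Run `pip install -r requirements.txt` to confirm the dependencies resolve.
3. Run `python -m pytest` and make sure the tests still pass.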

It's early days, but I'm pretty excited that there's now a way for me to keep making a ridiculous number of repositories and keep them well maintained. Definitely check out the GitHub Copilot coding agent to see if there are ways that it can help you automate the boring parts of repository maintenance.

Sunday, June 1, 2025

Teaching Python with Codespaces

Whenever I am teaching Python workshops, tutorials, or classes, I love to use GitHub Codespaces. Any repository on GitHub can be opened inside a GitHub Codespace, which gives the student a full Python environment and a browser-based VS Code. Students spend less time setting up their environment and more time actually coding - the fun part! In this post, I'll walk through my tips for using Codespaces for teaching Python, particularly for classes about web apps, data science, or generative AI.

Getting started

You can start a GitHub Codespace from any repository. Navigate to the front page of the repository, then select "Code" > "Codespaces" > "Create codespace on main":

By default, the Codespace will build an environment based off a universal Docker image, which includes Python, NodeJS, Java, and other popular languages.

But what if you want more control over the environment?

Dev Containers

A dev container is defined by an open specification that describes how a project should be opened in a development environment, and it's supported by several tools, including GitHub Codespaces and VS Code (via the Dev Containers extension).

To define a dev container for your repository, add a devcontainer.json that describes the desired Docker image, VS Code extensions, and project settings. Let's look at a few examples, from simple to complex.

A simple dev container configuration

The simplest devcontainer.json specifies a Docker image, like from Docker Hub or the Microsoft Artifact Registry. Microsoft provides several Python-specific images optimized for dev containers.

For example, my python-3.13-playground repository sets up Python 3.13 using one of those images, and also configures a few settings and default extensions:

{
  "name": "Python 3.13 playground",
  "image": "mcr.microsoft.com/devcontainers/python:3.13-bullseye",
  "customizations": {
    "vscode": {
      "settings": { 
        "python.defaultInterpreterPath": "/usr/local/bin/python",
        "python.linting.enabled": true
      },
      "extensions": [
        "ms-python.python",
        "ms-python.vscode-pylance",
        "ms-python.vscode-python-envs"
      ]
    }
  }
}

The settings inside the "vscode" field will be used whenever the playground is opened in either GitHub Codespaces or local VS Code.

A dev container with Dockerfile

We can also customize a dev container with a custom Dockerfile, if we want to run additional system commands on the image.

For example, the python-ai-agent-frameworks-demos repository uses a Dockerfile to install required Python packages:

FROM mcr.microsoft.com/devcontainers/python:3.12-bookworm

COPY requirements.txt /tmp/pip-tmp/

RUN pip3 --disable-pip-version-check install -r /tmp/pip-tmp/requirements.txt \
    && rm -rf /tmp/pip-tmp

The devcontainer.json references the Dockerfile in the "build" section:

{
  "name": "python-ai-agent-frameworks-demos",
  "build": {
    "dockerfile": "Dockerfile",
    "context": ".."
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python",
        "ms-azuretools.vscode-bicep"
      ],
      "python.defaultInterpreterPath": "/usr/local/bin/python"
    }
  },
  "remoteUser": "vscode"
}

You can also install OS-level packages in the Dockerfile, using Linux commands like apt-get, as you can see in this fabric-mcp-server Dockerfile.

A dev container with docker-compose.yaml

When our dev container is defined with a Dockerfile or image name, the Codespace creates an environment based off a single Docker container, and that is the container that we write our code inside.

It's also possible to set up multiple containers within the Codespace environment, with a primary container for our code development, plus additional services running in other containers. This is a great way to bring in containerized services like PostgreSQL, Redis, or MongoDB - anything that can be put in a container and exposed over the container network.

To configure a multi-container environment, add a docker-compose.yaml to the .devcontainer folder. For example, this docker-compose.yaml from my postgresql-playground repository configures a Python container plus a PostgreSQL container:

version: "3"

services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
      args:
        IMAGE: python:3.12
    volumes:
      - ..:/workspace:cached
    command: sleep infinity
    network_mode: service:db

  db:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: LocalPasswordOnly

volumes:
  postgres-data:

The devcontainer.json references that docker-compose.yaml file, and its "service" field declares that the "app" container is the primary container for the environment:

{
  "name": "postgresql-playground",
  "dockerComposeFile": "docker-compose.yaml",
  "service": "app",
  "workspaceFolder": "/workspace",
...

Teaching Web Apps

Now let's look at topics you might be teaching in Python classes. One popular topic is web applications built with Python backends, using frameworks like Flask, Django, or FastAPI. A simple webapp can use the Python dev container from earlier, but if the webapp has a database, then you'll want to use the docker-compose setup with multiple containers.

Flask + DB

For example, my flask-db-quiz example configures a Flask backend with PostgreSQL database. The docker-compose.yaml is the same as the previous PostgreSQL example, and the devcontainer.json includes a few additional customizations:

{
  "name": "flask-db-quiz",
  "dockerComposeFile": "docker-compose.yaml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "forwardPorts": [5000, 50505, 5432],
  "portsAttributes": {
    "50505": {"label": "Flask port", "onAutoForward": "notify"},
    "5432": {"label": "PostgreSQL port", "onAutoForward": "silent"}
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python",
        "mtxr.sqltools",
        "mtxr.sqltools-driver-pg"
      ],
      "settings": {
        "sqltools.connections": [
          {
          "name": "Container database",
          "driver": "PostgreSQL",
          "previewLimit": 50,
          "server": "localhost",
          "port": 5432,
          "database": "app",
          "username": "app_user",
          "password": "app_password"
          }
        ]
      }
    }
  },
  "postCreateCommand": "python3 -m pip install -r requirements-dev.txt && pre-commit install",
  "remoteUser": "vscode"
}

The "portsAttributes" field in devcontainer.json tells Codespaces that we're exposing services at those parts, which makes them easy to find in the Ports tab in VS Code.

Screenshot of Ports tab in GitHub Codespaces

Once the app is running, I can click on the URL in the Ports tab and open it in a new window. I can even right-click to change the port visibility, so I can share the URL with classmates or teacher. The URL will only work as long as the Codespace and app are running, but this can be really helpful for quick sharing in class.

Another customization in that devcontainer.json is the addition of the SQLTools extension, for easy browsing of database data. The "sqltools.connections" field sets up everything needed to connect to the local database.

Screenshot of SQLTools extension for browsing a database table

Django + DB

We can use a very similar configuration for Django apps, as demonstrated in my django-quiz-app repository.

By default, Django's built-in security rules are stricter than Flask's, so you may see security errors when using a Django app from the forwarded port's URL, especially when submitting forms. That's because Codespace "local" URLs aren't truly local URLs, and they bake the port into the URL instead of using it as a true port. For example, for a Django app on port 8000, the forwarded URL could be:

https://supreme-space-orbit-64xpgrxxxcwx4-8000.app.github.dev/

To get everything working nicely in Codespaces, we need Django to treat the forwarded URL as a trusted origin. I made that adjustment in settings.py:

ALLOWED_HOSTS = []
CSRF_TRUSTED_ORIGINS = ["http://localhost:8000",]
if env.get_value("CODESPACE_NAME", default=None):
  CSRF_TRUSTED_ORIGINS.append(
   f"https://{env('CODESPACE_NAME')}-8000.{env('GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN')}"
  )

I've run into this with other frameworks as well, so if you ever get a cross-site origin error when running web apps in Codespaces, a similar approach may help you resolve the error.

Teaching Generative AI

For the past two years, a lot of my teaching has been around generative AI models, like large language models and embedding models. Fortunately, there are two ways that we can use Codespaces with those models for free.

GitHub Models

My current favorite approach is to use GitHub Models, which are freely available models for anyone with a GitHub Account. The catch is that they're rate limited, so you can only send a certain number of requests and tokens per day to each model, but you can get a lot of learning done on that limited budget.

To use the models, we can point our favorite Python AI package at the GitHub Models endpoint, and pass in a GitHub Personal Access Token (PAT) as the API key. Fortunately, every Codespace exposes a GITHUB_TOKEN environment variable automatically, so we can just access that directly from the env.

For example, this code uses the OpenAI package to connect to GitHub Models:

import os

import openai

client = openai.OpenAI(
  api_key=os.environ["GITHUB_TOKEN"],
  base_url="https://models.inference.ai.azure.com")

Alternatively, when you are trying out a GitHub Model from the marketplace, select "Use this Model" to get suggested Python code and open a Codespace with code examples.

Screenshot of GitHub Models playground with Use this Model button

For more examples with other frameworks, many of them from my Python + AI series, check out the python-ai-agent-frameworks-demos repository.


Ollama

My other favorite way to use free generative AI models is Ollama. Ollama is a tool, available for any OS, that makes it possible to run and interact with local language models, especially SLMs (small language models).

On my fairly underpowered Mac M1 laptop, I can run models with up to 8 billion parameters (corresponding to a ~5 GB download size). The most powerful LLMs, like OpenAI's GPT-4 series, typically have a few hundred billion parameters (quite a bit more), but you can get surprisingly good results from smaller models. The Ollama tooling runs a model as efficiently as possible based on the hardware, so it will use a GPU if your machine has one, but otherwise will use various tricks to make the most of the CPU.

Screenshot of Ollama running in terminal

I put together an ollama-python playground repo that makes a Codespace with Ollama already downloaded. All of the configuration is done inside devcontainer.json:

{
  "name": "ollama-python-playground",
  "image": "mcr.microsoft.com/devcontainers/python:3.12-bullseye",
  "features": {
    "ghcr.io/prulloac/devcontainer-features/ollama:1": {}
  },
  "customizations": {
    "vscode": {
      "settings": {
        "python.defaultInterpreterPath": "/usr/local/bin/python"
      },
      "extensions": [
        "ms-python.python"
      ]
    }
  },
  "hostRequirements": {
    "memory": "16gb"
  },
  "remoteUser": "vscode"
}

I could have installed Ollama using a Dockerfile, but instead, inside the "features" section, I added a dev container feature that takes care of installing Ollama for me. Once the Codespace opens, I can immediately run "ollama pull phi3:mini" and start interacting with the model, and also use Python programs to interact with the locally exposed Ollama API endpoints.
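
For example, here's a minimal sketch of calling the local model from Python, assuming Ollama is running in the Codespace, phi3:mini has already been pulled, and we point the openai package at Ollama's OpenAI-compatible endpoint on port 11434:

import openai

# Ollama serves an OpenAI-compatible API locally; the API key can be any placeholder string
client = openai.OpenAI(base_url="http://localhost:11434/v1", api_key="nokey")

response = client.chat.completions.create(
  model="phi3:mini",
  messages=[{"role": "user", "content": "Write a haiku about Python."}])
print(response.choices[0].message.content)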

You may run into issues running larger SLMs, however, due to the Codespace defaulting to a 4-core machine with only 16 GB of RAM. In that case, you can change the "hostRequirements" to "32gb" or even "64gb" and restart the Codespace. Unfortunately, that will use up your monthly free Codespace hours at double or quadruple the rate.

Generally, making requests to a local Ollama model will be slower than making requests to GitHub Models, because they're being processed by relatively underpowered machines that do not have GPUs. That's why I start with GitHub Models these days, but support using Ollama as a backup, to have as many options as possible.

Teaching Data Science

We can also use Codespaces when teaching data science, when class assignments are more likely to use Jupyter notebooks and scientific computing packages.

If you typically set up your data science environment using Anaconda instead of pip, you can use conda inside the Dockerfile, as demonstrated in my colleague's conda-devcontainer-demo:

FROM mcr.microsoft.com/devcontainers/miniconda:0-3

RUN conda install -n base -c conda-forge mamba
COPY environment.yml* .devcontainer/noop.txt /tmp/conda-tmp/
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then umask 0002 \
    && /opt/conda/bin/mamba env create -f /tmp/conda-tmp/environment.yml; fi \
    && rm -rf /tmp/conda-tmp

The corresponding devcontainer.json points the Python interpreter path to that conda environment:

{
  "name": "conda-devcontainer-demo",
  "build": { 
    "context": "..",
    "dockerfile": "Dockerfile"
  },
  "postCreateCommand": "conda init",
  "customizations": {
    "vscode": {
      "settings": {
        "python.defaultInterpreterPath": "/opt/conda/envs/demo"
      },
      "extensions": [
        "ms-python.python",
        "ms-toolsai.jupyter",
      ]
    }
  }
}

That configuration includes a "postCreateCommand", which tells Codespace to run "conda init" once everything is loaded in the environment, inside the actual VS Code terminal. There are times when it makes sense to use the lifecycle commands like postCreateCommand instead of running a command in the Dockerfile, depending on what the command does.

The extensions above include both the Python extension and the Jupyter extension, so that students can get started interacting with Jupyter notebooks immediately. Another helpful extension could be Data Wrangler, which adds richer data browsing to Jupyter notebooks and can generate pandas code for you.

If you are working entirely in Jupyter notebooks, then you may want the full JupyterLab experience. In that case, it's actually possible to open a Codespace in JupyterLab instead of the browser-based VS Code.

Disabling GitHub Copilot

As a professional software developer, I'm a big fan of GitHub Copilot to aid my programming productivity. However, in classroom settings, especially in introductory programming courses, you may want to discourage the use of coding assistants like Copilot. Fortunately, you can configure a setting inside the devcontainer.json to disable it, either for all files or specifically for Python:

"github.copilot.enable": {
   "*": true,
   "python": false
}

You could also add that to a .vscode/settings.json so that it would take effect even if the student opened the repository in local VS Code, without using the dev container.

Some classrooms then install their own custom-made extensions that offer more of a TA-like coding assistant, which will help the student debug their code and think through the assignment, but not actually provide the code. Check out the research from CS50 at Harvard and CS61A at UC Berkeley.

Optimizing startup time

When you're first starting up a Codespace for a repository, you might be sitting there waiting for 5-10 minutes, as it builds the Docker image and loads in all the extensions. That's why I often ask students to start loading the Codespace at the very beginning of a lesson, so that it's ready by the time I'm done introducing the topics.

Alternatively, you can use pre-builds to speed up startup time, if you've got the budget for it. Follow the steps to configure a pre-build for the repository, and then Codespace will build the image whenever the repo changes and store it for you. Subsequent startup times will only be a couple minutes. Pre-builds use up free Codespace storage quota more quickly, so you may only want to enable them right before a lesson and disable after. Or, ask if your school can provide more Codespace storage budget.

For additional tips on managing Codespace quotas and getting the most out of the free quotas, read this post by my colleague Alfredo Deza.

Any downsides?

Codespaces is a great way to set up a fully featured environment complete with extensions and services you need in your class. However, there are some drawbacks to using Codespaces in a classroom setting:

  • Saving work: Students need to know how to use git to be able to fork, commit, and push changes. Often students don't know how to use git, or can get easily confused (like all of us!). If your students don't know git, then you might opt to have them download their changed code instead and save or submit it using other mechanisms. Some teachers also build VS Code extensions for submitting work.
  • Losing work: By default, Codespaces only stick around for 30 days, so any changes are lost after that. If a student forgets to save their work, they will lose it entirely. Once again, you may need to give students other approaches for saving their work more frequently.

Additional resources

If you're a teacher in a classroom, you can also take advantage of GitHub's education programs, which offer additional benefits for teachers and students.

Friday, April 11, 2025

Use any Python AI agent framework with free GitHub Models

I ❤️ when companies offer free tiers for developer services, since it gives everyone a way to learn new technologies without breaking the bank. Free tiers are especially important for students and people between jobs, where the desire to learn is high but the available cash is low.

That's why I'm such a fan of GitHub Models: free, high-quality generative AI models available to anyone with a GitHub account. The available models include the latest OpenAI LLMs (like o3-mini), LLMs from the research community (like Phi and Llama), LLMs from other popular providers (like Mistral and Jamba), multimodal models (like gpt-4o and llama-vision-instruct) and even a few embedding models (from OpenAI and Cohere). So cool! With access to such a range of models, you can prototype complex multi-model workflows to improve your productivity or heck, just make something fun for yourself. 🤗

To use GitHub Models, you can start off in no-code mode: open the playground for a model, send a few requests, tweak the parameters, and check out the answers. When you're ready to write code, select "Use this model". A screen will pop up where you can select a programming language (Python/JavaScript/C#/Java/REST) and select an SDK (which varies depending on model). Then you'll get instructions and code for that model, language, and SDK.

But here's what's really cool about GitHub Models: you can use them with all the popular Python AI frameworks, even if the framework has no specific integration with GitHub Models. How is that possible?

  1. The vast majority of Python AI frameworks support the OpenAI Chat Completions API, since that API became a de facto standard supported by many LLM API providers besides OpenAI itself.
  2. GitHub Models also provide OpenAI-compatible endpoints for chat completion models.
  3. Therefore, any Python AI framework that supports OpenAI-like models can be used with GitHub Models as well. 🎉

To prove my claim, I've made a new repository with examples from eight different Python AI agent packages, all working with GitHub Models: python-ai-agent-frameworks-demos. There are examples for AutoGen, LangGraph, Llamaindex, OpenAI Agents SDK, OpenAI standard SDK, PydanticAI, Semantic Kernel, and SmolAgents. You can open that repository in GitHub Codespaces, install the packages, and get the examples running immediately.

GitHub models plus 8 package names

Now let's walk through the API connection code for GitHub Models for each framework. Even if I missed your favorite framework, I hope my tips here will help you connect any framework to GitHub Models.

OpenAI SDK

I'll start with openai, the package that started it all!

import os

import openai

client = openai.OpenAI(
  api_key=os.environ["GITHUB_TOKEN"],
  base_url="https://models.inference.ai.azure.com")

The code above demonstrates the two key parameters we'll need to configure for all frameworks:

  • api_key: When using OpenAI.com, you pass your OpenAI API key here. When using GitHub Models, you pass in a Personal Access Token (PAT). If you open the repository (or any repository) in GitHub Codespaces, a PAT is already stored in the GITHUB_TOKEN environment variable. However, if you're working locally with GitHub Models, you'll need to generate a PAT yourself and store it. PATs expire after a while, so you need to generate new PATs every so often.
  • base_url: This parameter tells the OpenAI client to send all requests to "https://models.inference.ai.azure.com" instead of the OpenAI.com API servers. That's the domain that hosts the OpenAI-compatible endpoint for GitHub Models, so you'll always pass that domain as the base URL.

If we're working with the new openai-agents SDK, we use very similar code, but we must use the AsyncOpenAI client from openai instead. Lately, Python AI packages are defaulting to async, because it's so much better for performance.

import os

import agents
import openai

client = openai.AsyncOpenAI(
  base_url="https://models.inference.ai.azure.com",
  api_key=os.environ["GITHUB_TOKEN"])

spanish_agent = agents.Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
    model=agents.OpenAIChatCompletionsModel(model="gpt-4o", openai_client=client))
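
To actually run the agent, the SDK provides a Runner class. Here's a minimal sketch of invoking the agent synchronously (the input string is just an example):

result = agents.Runner.run_sync(spanish_agent, "Say hello to the class.")
print(result.final_output)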

PydanticAI

Now let's look at all of the packages that make it really easy for us, by allowing us to directly bring in an instance of either OpenAI or AsyncOpenAI.

For PydanticAI, we configure an AsyncOpenAI client, then construct an OpenAIModel object from PydanticAI, and pass that model to the agent:

import os

import openai
import pydantic_ai
import pydantic_ai.models.openai
import pydantic_ai.providers.openai

client = openai.AsyncOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.inference.ai.azure.com")

model = pydantic_ai.models.openai.OpenAIModel(
    "gpt-4o",
    provider=pydantic_ai.providers.openai.OpenAIProvider(openai_client=client))

spanish_agent = pydantic_ai.Agent(
    model,
    system_prompt="You only speak Spanish.")

Semantic Kernel

For Semantic Kernel, the code is very similar. We configure an AsyncOpenAI client, then construct an OpenAIChatCompletion object from Semantic Kernel, and add that object to the kernel.

import os

import openai
import semantic_kernel
import semantic_kernel.connectors.ai.open_ai
import semantic_kernel.agents

chat_client = openai.AsyncOpenAI(
  api_key=os.environ["GITHUB_TOKEN"],
  base_url="https://models.inference.ai.azure.com")

chat_completion_service = semantic_kernel.connectors.ai.open_ai.OpenAIChatCompletion(
  ai_model_id="gpt-4o",
  async_client=chat_client)

kernel = semantic_kernel.Kernel()
kernel.add_service(chat_completion_service)

spanish_agent = semantic_kernel.agents.ChatCompletionAgent(
  kernel=kernel,
  name="Spanish agent",
  instructions="You only speak Spanish")

AutoGen

Next, we'll check out a few frameworks that have their own wrapper of the OpenAI clients, so we won't be using any classes from openai directly.

For AutoGen, we configure both the OpenAI parameters and the model name in the same object, then pass that to each agent:

import os

import autogen_ext.models.openai
import autogen_agentchat.agents

client = autogen_ext.models.openai.OpenAIChatCompletionClient(
  model="gpt-4o",
  api_key=os.environ["GITHUB_TOKEN"],
  base_url="https://models.inference.ai.azure.com")

spanish_agent = autogen_agentchat.agents.AssistantAgent(
    "spanish_agent",
    model_client=client,
    system_message="You only speak Spanish")

LangGraph

For LangGraph, we configure a very similar object, which even has the same parameter names:

import os

import langchain_openai
import langgraph.graph

model = langchain_openai.ChatOpenAI(
  model="gpt-4o",
  api_key=os.environ["GITHUB_TOKEN"],
  base_url="https://models.inference.ai.azure.com",
)

def call_model(state):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

workflow = langgraph.graph.StateGraph(langgraph.graph.MessagesState)
workflow.add_node("agent", call_model)

SmolAgents

Once again, for SmolAgents, we configure a similar object, though with slightly different parameter names:

import os

import smolagents

model = smolagents.OpenAIServerModel(
  model_id="gpt-4o",
  api_key=os.environ["GITHUB_TOKEN"],
  api_base="https://models.inference.ai.azure.com")

agent = smolagents.CodeAgent(model=model)

Llamaindex

I saved Llamaindex for last, as it is the most different. The Llamaindex Python package has a different constructor for OpenAI.com versus OpenAI-like servers, so I opted to use that OpenAILike constructor instead. However, I also needed an embeddings model for my example, and the package doesn't have an OpenAIEmbeddingsLike constructor, so I used the standard OpenAIEmbedding constructor.

import os

from llama_index.core import Settings
import llama_index.embeddings.openai
import llama_index.llms.openai_like
import llama_index.core.agent.workflow

Settings.llm = llama_index.llms.openai_like.OpenAILike(
  model="gpt-4o",
  api_key=os.environ["GITHUB_TOKEN"],
  api_base="https://models.inference.ai.azure.com",
  is_chat_model=True)

Settings.embed_model = llama_index.embeddings.openai.OpenAIEmbedding(
  model="text-embedding-3-small",
  api_key=os.environ["GITHUB_TOKEN"],
  api_base="https://models.inference.ai.azure.com")

# query_engine_tools is built earlier in the full example from an indexed document set
agent = llama_index.core.agent.workflow.ReActAgent(
  tools=query_engine_tools,
  llm=Settings.llm)

Choose your models wisely!

In all of the examples above, I specified the "gpt-4o" model. The "gpt-4o" model is a great choice for agents because it supports function calling, and many agent frameworks only work (or work best) with models that natively support function calling.
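
If you want to check whether a particular model handles function calling, one quick test is to send it a tool definition with the plain openai package and see whether it responds with a tool call. Here's a minimal sketch (the get_weather tool is made up purely for illustration):

import json
import os

import openai

client = openai.OpenAI(
  api_key=os.environ["GITHUB_TOKEN"],
  base_url="https://models.inference.ai.azure.com")

# A made-up tool definition, just to see if the model emits a tool call
tools = [{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Look up the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {"city": {"type": "string"}},
      "required": ["city"]}}}]

response = client.chat.completions.create(
  model="gpt-4o",
  messages=[{"role": "user", "content": "What's the weather in Sydney?"}],
  tools=tools)

message = response.choices[0].message
if message.tool_calls:
  tool_call = message.tool_calls[0]
  print(tool_call.function.name, json.loads(tool_call.function.arguments))
else:
  print("No tool call:", message.content)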

Fortunately, GitHub Models includes multiple models that support function calling, at least in my basic experiments:

  • gpt-4o
  • gpt-4o-mini
  • o3-mini
  • AI21-Jamba-1.5-Large
  • AI21-Jamba-1.5-Mini
  • Codestral-2501
  • Cohere-command-r
  • Ministral-3B
  • Mistral-Large-2411
  • Mistral-Nemo
  • Mistral-small

You might find that some models work better than others, especially if you're using agents with multiple tools. With GitHub Models, it's very easy to experiment and see for yourself, by simply changing the model name and re-running the code.

So, have you started prototyping AI agents with GitHub Models yet?! Go on, experiment, it's fun!

Tuesday, January 31, 2023

Using Copilot with Python apps

I've been hesitant to try Github Copilot, the "AI pair programmer". Like many developers, I don't want to accidentally use someone's copyrighted code without proper attribution.

Fortunately, Github is continually adding more features to Copilot to make that possibility both rarer and easier to spot. Plus, I'm now on the Cloud advocacy team at Microsoft (Github's parent company), so I keep hearing about the benefits of Copilot from my colleagues. I decided that it was time to try it out! 🤖

I enabled Copilot while developing a Flask + PostgreSQL demo app, and wow, I am a huge fangirl already. 😍 Here's how it helped me out:

Writing ORM queries

My app uses SQLAlchemy, a popular package that's been through a few iterations. I've only used SQLAlchemy a few times, so I often find myself unsure how to form the correct ORM queries. I'm much better at SQL than SQLAlchemy, as it turns out. Fortunately, Copilot has seen enough examples of SQLAlchemy queries that it was able to form them for me.

Copilot wrote the queries after I provided the route function header and variable names. It's worth noting my models.py file already existed at this point.

@bp.route("/surveys/<int:survey_id>", methods=["GET"])
def survey_page(survey_id):
    survey = Survey.query.where(Survey.id == survey_id).first()
    answers = Survey.query.where(Answer.survey == survey_id)

Yes, those are pretty straightforward queries, but it still would have taken me a web search first to remember the SQLAlchemy ORM methods. It also was able to write queries with filters, especially if I wrote the comment first:

# Count matching answers in the database
answer_count = session.query(models.Answer).filter_by(
    selected_option="strawberry").count()

Would I have learned more of the SQLAlchemy API had I written those queries myself? Yes, probably, but 1) I don't know how long that knowledge would have lasted, given I bounce between multiple ORMs across projects, and 2) we're at the point of web development where there are too many APIs in play to memorize, and our time can be spent on gluing together apps.

Of course, we need to make sure these queries work! That brings me to my favorite use of Copilot...

Writing tests

My app uses Pytest to test the routes and models. I started by creating a test_routes.py file with this comment at the top:

# Test the routes in app.py using pytest

Copilot immediately took care of the imports for me. Interestingly, I didn't need pytest imported at first, since it's only necessary if you define fixtures or use other special features, but I did end up writing a few fixtures later.

import pytest

from app import app

Now I wrote the signature for the first test:

def test_index_redirect():

My goal was to test this route, whose code was already written:

@bp.route("/", methods=["GET"])
def index():
    return redirect(url_for("surveys.surveys_list_page"))

Copilot filled in the rest of the code:

    with app.test_client() as client:
        resp = client.get("/")
        assert resp.status_code == 302
        assert resp.location == "http://localhost/surveys"

I ran the tests then, and discovered only one issue (which I suspected when I saw the suggested code): the location needed to be a relative URL, just "/surveys". I was very happy to have this test written, as I'm relatively new to Pytest and had already forgotten how to write Pytest tests against a Flask app. If Copilot hadn't written it, then I would have dug up a similar app of mine and adapted those tests.

For my next test, I wrote this function signature and comment:

def test_surveys_create_handler(client):
    # Test the create handler by sending a POST request with form data

Copilot filled in the next line, complete with a fake survey question. That's part of what makes Copilot particularly great for tests, it loves making up fake data. 😆

    resp = client.post("/surveys", data={
        "survey_question": "What's your favorite color?",
        "survey_topic": "colors",
        "survey_options": "red\nblue\nyellow"})

For the rest of the tests, my general approach was to write a function signature, write comments for the stages of the test, and let Copilot fill in the rest. You can see many of those comments still, in my test_routes.py file. The only place it flailed was properly setting a cookie in Flask, so that was something I had to research myself. At some point, I refactored the common app.test_client() into a test fixture, since so many tests used it. Copilot may not always be the DRYest! 💦
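
For reference, the refactored client fixture looks roughly like this (a minimal sketch; the actual fixture in the repo may differ slightly):

import pytest

from app import app


@pytest.fixture
def client():
    with app.test_client() as client:
        yield client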

Reflections

I like Copilot because it really is like a pair programmer, except without the feeling of being watched (which is uncomfortable for me, personally). There's also no judgment. I sometimes will correct a Copilot suggestion, run the tests, and then realize Copilot was actually right. I'm more amused than embarrassed when that happens, since I know Copilot really doesn't care at all.

I also don't feel like Copilot has copied the code of any particular developer out there, in the suggestions that it gave me. What I saw instead is a model that has seen a lot of similar code, and also has seen the code inside my project folder, and it's able to match those patterns together. The suggestions felt like the results of a StackOverflow search, but quicker and personalized.

I think it's interesting that using Copilot really encourages the writing of comments, and I wonder if this will lead to a future where code is more commented, because people leave in the comments they used to prompt the suggestions. I often strip mine out before committing, but not always. I also wonder if comments-first code writing will generally lead to people coding faster, because we will first think through our ideas in an abstracted sense (using English) and then implement them with syntactic constraints (code). I suspect that coding is actually easier when we describe it in our natural language first.

Those are my musings. I'd love to hear about your experimentation and how you think it will affect the future of coding.

Monday, August 8, 2022

Porting a project from spaces to tabs

I'm currently working on a JavaScript codebase that has some old crusty code, and I'm modernizing it in various ways, like upgrading to ES6 syntax and linting with ESLint. I also like to add in Prettier to every codebase, as an automated step, so that my code is always consistently formatted, and so that future pull requests from other developers can easily follow the same conventions.

But I had a dilemma: half my code was written with 2 space indents, the other half was written with 4 space indents, and I needed to tell Prettier what to use. What's a girl to do?? Well, I considered averaging it for nice 3-space indents everywhere (I kid, I kid), but I instead made a radical decision: just use tabs! I'd heard that Prettier is considering making tabs the default anyway, and after reading the many comments on their PR thread, I became convinced that tabs are better than spaces, at least for an autoformatted project.

Since my projects and editors have used spaces forever, there were a few things I needed to do in order to smoothly move over to tabs. Here are the steps I took:

  • Reformat files to use tabs. To change all my current files to tabs, I used Prettier. First I configured it by specifying "useTabs" in my .prettierrc.json:

    {
    	"useTabs": true
    }
    

    Then I ran the prettier command on all my JS/JSON files:

    prettier \"**/*.{js,json}\" --ignore-path ./.eslintignore --write
          
  • Ignore the reformat commit in git blame. I really hate when reformatting commits make it harder to use git blame to track logical changes, so I was thrilled to discover that there's a way for Git/Github to ignore particular revisions while blaming. I followed this blog post, adding a .git-blame-ignore-revs file with my most recent commit:

    # Reformat js/json with Prettier, spaces to tabs
    a08f09aa7c4e9381ae2036754bd9311e78c3b40f
    

    Then I ran a command to tell my local git to ignore the revision:
    git config blame.ignoreRevsFile .git-blame-ignore-revs

    Once I pushed the commit with that file, I saw that Github does indeed ignore changes from that commit when I use the blame feature. So cool!

    Screenshot from Github blame UI

  • Make Github render tabs using 4 spaces. For whatever reason, Github defaults to 8 spaces for tabs, and that is too dang much. To make Github render the tabs in my projects with just 4 spaces, I added an .editorconfig file to my project:

    root = true
    
    [*]
    indent_style = tab
    indent_size = 4
    

    Github also allows users to customize tabs across all project repositories, and that user setting takes precedence over the per-project .editorconfig setting. That's likely for accessibility reasons, since some folks might require a large number of spaces for better readability. To change my account preference, I opened up Settings > Appearance and selected my desired number of spaces:

    Screenshot of Github settings

    So, if I visit my project in an incognito window, Github will render the tabs with 4 spaces, but if I visit the project from my logged in browser, Github will render the tab with 2 spaces.

  • Make VS Code insert tabs when I tab. VS Code tries to adjust its indentation style with autodetection based on the current file, but I wanted to make sure it always inserted a tab in new files in my project, too. It defaults to inserting spaces when it isn't sure, so I needed to explicitly override that setting. I could have changed the setting across all projects, but most of my other projects use spaces, so I instead figured out how to change it in just this project for now.

    To change it, I opened up Settings > Workspace, searched for "insert spaces", and un-checked the "Editor: Insert spaces" setting. That created a .vscode/settings.json file with an "editor.insertSpaces" property:

    {
    	"editor.insertSpaces": false
    }
    

    Another option for VS Code is to use a VS Code plugin that understands .editorconfig files. If you go that route, you don't need to finagle with the VS Code settings yourself.