Tuesday, January 31, 2023

Using Copilot with Python apps

I've been hesitant to try GitHub Copilot, the "AI pair programmer". Like many developers, I don't want to accidentally use someone's copyrighted code without proper attribution.

Fortunately, GitHub is continually adding more features to Copilot to make that possibility both rarer and easier to spot. Plus, I'm now on the Cloud advocacy team at Microsoft (GitHub's parent company), so I keep hearing about the benefits of Copilot from my colleagues. I decided that it was time to try it out! 🤖

I enabled Copilot while developing a Flask + PostgreSQL demo app, and wow, I am a huge fangirl already. 😍 Here's how it helped me out:

Writing ORM queries

My app uses SQLAlchemy, a popular package that's been through a few iterations. I've only used SQLAlchemy a few times, so I often find myself unsure how to form the correct ORM queries. I'm much better at SQL than SQLAlchemy, as it turns out. Fortunately, Copilot has seen enough examples of SQLAlchemy queries that it was able to form them for me.

Copilot wrote the queries after I provided the route function header and variable names. It's worth noting my models.py file already existed at this point.

@bp.route("/surveys/<int:survey_id>", methods=["GET"])
def survey_page(survey_id):
    survey = Survey.query.where(Survey.id == survey_id).first()
    answers = Answer.query.where(Answer.survey == survey_id)

Yes, those are pretty straightforward queries, but it still would have taken me a web search to remember the SQLAlchemy ORM methods. Copilot was also able to write queries with filters, especially if I wrote the comment first:

# Count matching answers in the database
answer_count = session.query(models.Answer).filter_by(
    selected_option="strawberry").count()

Would I have learned more of the SQLAlchemy API had I written those queries myself? Yes, probably, but 1) I don't know how long that knowledge would have lasted, given that I bounce between multiple ORMs across projects, and 2) modern web development involves too many APIs to memorize them all, and our time is often better spent gluing apps together.

Of course, we need to make sure these queries work! That brings me to my favorite use of Copilot...

Writing tests

My app uses Pytest to test the routes and models. I started by creating a test_routes.py file with this comment at the top:

# Test the routes in app.py using pytest

Copilot immediately took care of the imports for me. Interestingly, I didn't need pytest imported at first, since it's only necessary if you define fixtures or use other special features, but I did end up writing a few fixtures later.

import pytest

from app import app

Now I wrote the signature for the first test:

def test_index_redirect():

My goal was to test this route, whose code was already written:

@bp.route("/", methods=["GET"])
def index():
    return redirect(url_for("surveys.surveys_list_page"))

Copilot filled in the rest of the code:

    with app.test_client() as client:
        resp = client.get("/")
        assert resp.status_code == 302
        assert resp.location == "http://localhost/surveys"

I ran the tests then, and discovered only one issue (which I suspected when I saw the suggested code): the location needed to be a relative URL, just "/surveys". I was very happy to have this test written, as I'm relatively new to Pytest and had already forgotten how to write Pytest tests against a Flask app. If Copilot hadn't written it, I would have dug up a similar app of mine and adapted those tests.
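
For reference, here's the full test with that one-line fix applied:

def test_index_redirect():
    with app.test_client() as client:
        resp = client.get("/")
        assert resp.status_code == 302
        assert resp.location == "/surveys"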

For my next test, I wrote this function signature and comment:

def test_surveys_create_handler(client):
    # Test the create handler by sending a POST request with form data

Copilot filled in the next line, complete with a fake survey question. That's part of what makes Copilot particularly great for tests: it loves making up fake data. 😆

    resp = client.post("/surveys", data={
        "survey_question": "What's your favorite color?",
        "survey_topic": "colors",
        "survey_options": "red\nblue\nyellow"})

For the rest of the tests, my general approach was to write a function signature, write comments for the stages of the test, and let Copilot fill in the rest. You can still see many of those comments in my test_routes.py file. The only place it flailed was properly setting a cookie in Flask, so that was something I had to research myself. At some point, I refactored the common app.test_client() call into a test fixture, since so many tests used it. Copilot may not always be the DRYest! 💦
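
For the curious, that fixture refactor looks something like this (a sketch; the exact fixture in my repo may differ slightly):

import pytest

from app import app


@pytest.fixture
def client():
    # Any test that takes a `client` argument gets a test client from here
    with app.test_client() as client:
        yield client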

Reflections

I like Copilot because it really is like a pair programmer, except without the feeling of being watched (which is uncomfortable for me, personally). There's also no judgment. I sometimes will correct a Copilot suggestion, run the tests, and then realize Copilot was actually right. I'm more amused than embarrassed when that happens, since I know Copilot really doesn't care at all.

I also don't feel like Copilot has copied the code of any particular developer out there, in the suggestions that it gave me. What I saw instead is a model that has seen a lot of similar code, and also has seen the code inside my project folder, and it's able to match those patterns together. The suggestions felt like the results of a StackOverflow search, but quicker and personalized.

I think it's interesting that using Copilot really encourages the writing of comments, and I wonder if this will lead to a future where code is more commented, because people leave in the comments they used to prompt the suggestions. I often strip mine out before committing, but not always. I also wonder if comments-first code writing will generally lead to people coding faster, because we will first think through our ideas in an abstracted sense (using English) and then implement them with syntactic constraints (code). I suspect that coding is actually easier when we describe it in our natural language first.

Those are my musings. I'd love to hear about your experimentation and how you think it will affect the future of coding.

Friday, January 13, 2023

Tips for writing Bicep files for Azure deployment

In the last few months, I've gotten very into the world of deploying apps to Azure using Bicep files, and I probably spend more time writing Bicep than Python lately. Bicep is a declarative language for describing the Azure resources that should be provisioned. It's similar to Terraform, but is specific to Azure.

I love the end result of Bicep: a highly replicable deploy to multiple interdependent resources with custom configurations. However, the journey isn't always so fun, since it's often not clear how to write Bicep to achieve the desired result. I'm writing up my tips for Bicep file writing to help others. Let me know if you have any to add!


Install the Bicep extension

I use VS Code when writing Bicep files (since I use VS Code for nearly everything these days), and fortunately it has a Bicep extension. The extension adds syntax highlighting to your files, points out in real time when something isn't correct, and even includes a snippet feature (which I haven't used personally).

Screenshot of Bicep extension erroring on dead code

Open the Bicep reference

I don't recommend the Bicep reference for leisure reading, but it's always good to check the authoritative source. I typically find the right page by searching the web for a specific resource, like "Azure Bicep reference Key Vault", or, if I know it, the resource namespace, like "Azure Bicep reference Microsoft.Web/sites", which brings me to this page. Each reference page has an example with all the properties, then descriptions for each property, and at the very bottom, a bunch of examples available in both ARM and Bicep.


Find related samples

My general approach for programming is to find something that's as close as possible to what I want to do, and then tweak it from there. My favorite source for samples is the AZD Templates Gallery, since you can filter by language, framework, or Azure resource. Use the AND/OR toggle in the top right to change how the filtering works. Once you find an appropriate sample, find the infra folder in the GitHub repo and see how it's written. Another source of samples is the Bicep reference, as I just mentioned.


Search GitHub for examples

If I still can't find a sample that shows what I'm looking for, I turn to GitHub code search. There's a fancy new code search in beta now, but the old code search is good enough for our purposes. I typically search for either a resource namespace or a particular configuration property name, and then add on either "lang:bicep" or "path:bicep". The "lang" filter searches for files tagged as the Bicep language, while "path" searches for files that contain bicep in the path; "lang:bicep" is probably the more proper filter to use. So my full search might be "Microsoft.Web/sites lang:bicep" or "appCommandLine lang:bicep". Keep in mind that you may find samples using deprecated settings, but you can cross-check with the reference to make sure they're modern.

Screenshot of GitHub code search for lang:bicep

Generate with NubesGen

NubesGen is a tool built by some of my Cloud Advocacy teammates. You tick a few boxes describing the app you're trying to make, and NubesGen will create the configuration files for you. It doesn't include every Azure resource ever, but it does include the most commonly used services. Since it's open source, you can file requests or submit pull requests to add more options. You can also browse its Bicep templates to see how it builds out the final Bicep files.


Create the resource manually

Sometimes I'm trying to Bicep-ify an app I've already deployed another way, like via the VS Code extension or AZ CLI commands. In that case, I open the existing resources in the Azure Portal and either:

  • Select "JSON view" from a specific resource's Overview page, and search through that JSON for the setting I'm interested in.
  • Use "Export template" on either the entire resource group or a particular resource, which downloads an ARM JSON file. I can search through that JSON or decompile it into a Bicep file. This is often my last resort, because exporting the ARM for an entire resource group takes a decent amount of time and often creates a bunch of unnecessary ARM cruft. It is not the minimal configuration needed to create that resource (more like the maximal!). It's a good tool to have in the toolbox though, just in case.

I will also often create a single resource from scratch in the Azure Portal, like a PostgreSQL server, just so I can see the default configuration. That way, I know what I actually need to override in my own Bicep file, and what I can leave out, if I'm happy with the default value.

Ask the mailing list

If you are a Microsoft employee, there's an "ARM-Bicep Deployment Experts" mailing list that fields lots of questions each week. You can sort through the archives, or ask your question if it's never been asked before. If you're not an MS employee, you can post on StackOverflow and add the azure-bicep tag.

Wednesday, January 4, 2023

Tips for debugging Django deployments to Azure App Service

I've been working on deploying various Django apps to Azure App Service this week and have run into a few issues, since I'm customizing the app configuration quite a bit. Both as a reminder for myself and as a resource for others, I figured I'd write up my tips for debugging Django app deployments on App Service.

After you finish deploying, first visit the app URL to see if it loads. If it does, amazing! If it doesn't, here are steps you can take to figure out what went wrong.

Check the deployment logs

Select Deployment Center from the side navigation menu, then select Logs. You should see a timestamped list of recent deploys:

Check whether the status of the most recent deploy is "Success (Active)" or "Failed". If it succeeded, the deployment logs might still reveal issues; if it failed, the logs should certainly reveal the issue.

Click the commit ID to open the logs for the most recent deploy. First scroll down to see if any errors or warnings are reported at the end. This is what you'll hopefully see if all went well:

Now scroll back up to find the timestamp with the label "Running oryx build". Oryx is the open source tool that builds apps for App Service, Functions, and other platforms, across all the supported MS languages. Click the Show logs link next to that label. That will pop open detailed logs at the bottom. Scroll down.

Here's what a successful Oryx build looks like for a Django application:


Command: oryx build /tmp/zipdeploy/extracted -o /home/site/wwwroot --platform python --platform-version 3.10 -p virtualenv_name=antenv --log-file /tmp/build-debug.log  -i /tmp/8dad4af021551b0 --compress-destination-dir | tee /tmp/oryx-build.log
Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
You can report issues at https://github.com/Microsoft/Oryx/issues

Oryx Version: 0.2.20220825.1, Commit: 24032445dbf7bf6ef068688f1b123a7144453b7f, ReleaseTagName: 20220825.1

Build Operation ID: |BrSlkWT7Cgo=.afae917a_
Repository Commit : 0aed8982-e84a-4573-9ed8-82d46e9056c4

Detecting platforms...
Detected following platforms:
  python: 3.10.8

Using intermediate directory '/tmp/8dad4af021551b0'.

Copying files to the intermediate directory...
Done in 0 sec(s).

Source directory     : /tmp/8dad4af021551b0
Destination directory: /home/site/wwwroot

Python Version: /tmp/oryx/platforms/python/3.10.8/bin/python3.10
Creating directory for command manifest file if it does not exist
Removing existing manifest file
Python Virtual Environment: antenv
Creating virtual environment...
Activating virtual environment...
Running pip install...
[21:49:24+0000] Collecting Django==4.1.1
[21:49:24+0000]   Using cached Django-4.1.1-py3-none-any.whl (8.1 MB)
[21:49:25+0000] Collecting psycopg2
[21:49:25+0000]   Using cached psycopg2-2.9.5-cp310-cp310-linux_x86_64.whl
[21:49:25+0000] Collecting python-dotenv
[21:49:25+0000]   Using cached python_dotenv-0.21.0-py3-none-any.whl (18 kB)
[21:49:25+0000] Collecting whitenoise[brotli]
[21:49:25+0000]   Using cached whitenoise-6.2.0-py3-none-any.whl (19 kB)
[21:49:25+0000] Collecting asgiref<4,>=3.5.2
[21:49:26+0000]   Using cached asgiref-3.5.2-py3-none-any.whl (22 kB)
[21:49:26+0000] Collecting sqlparse>=0.2.2
[21:49:26+0000]   Using cached sqlparse-0.4.3-py3-none-any.whl (42 kB)
[21:49:26+0000] Collecting Brotli
[21:49:26+0000]   Using cached Brotli-1.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (2.7 MB)
[21:49:27+0000] Installing collected packages: Brotli, whitenoise, sqlparse, python-dotenv, psycopg2, asgiref, Django
[21:49:38+0000] Successfully installed Brotli-1.0.9 Django-4.1.1 asgiref-3.5.2 psycopg2-2.9.5 python-dotenv-0.21.0 sqlparse-0.4.3 whitenoise-6.2.0

[notice] A new release of pip available: 22.2.2 -> 22.3.1
[notice] To update, run: pip install --upgrade pip

Content in source directory is a Django app
Running collectstatic...

132 static files copied to '/tmp/8dad4af021551b0/staticfiles', 643 post-processed.
Done in 21 sec(s).
Not a vso image, so not writing build commands
Preparing output...

Copying files to destination directory '/tmp/_preCompressedDestinationDir'...
Done in 22 sec(s).
Compressing content of directory '/tmp/_preCompressedDestinationDir'...
Copied the compressed output to '/home/site/wwwroot'

Removing existing manifest file
Creating a manifest file...
Manifest file created.
Copying .ostype to manifest output directory.

Done in 103 sec(s).

Look for these important steps in the Oryx build:

  • Detected following platforms: python: 3.10.8
    That should match your runtime in the App Service configuration.
  • Running pip install...
    That should install all the requirements in your requirements.txt. If Oryx didn't find your requirements.txt, you won't see the packages installed.
  • Content in source directory is a Django app
    This message means that Oryx detected your app is a Django app, based on the presence of Django in requirements.txt, and will now run the next step.
  • Running collectstatic...
    This step runs `manage.py collectstatic`, a necessary step for most Django apps. It can be disabled via the DISABLE_COLLECTSTATIC environment variable if desired. (See the settings sketch just after this list.)
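
One gotcha: collectstatic only succeeds if your Django settings declare where the files should be collected. Here's a minimal sketch of the relevant settings, assuming whitenoise for static file serving (this app's requirements include whitenoise[brotli]); your paths and storage backend may differ:

# settings.py (sketch): the bare minimum for collectstatic to work
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = "static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # where collectstatic copies files to

# whitenoise's storage backend compresses and fingerprints the collected files
# (the "643 post-processed" line in the build log above comes from this step)
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"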

If you see all those steps in the Oryx build, then that's a good sign that the build went well, and you can move on to checking the App Service logs.

Check the log stream

Under the Monitoring section of the side nav, select Log stream. Scroll to the timestamp corresponding to your most recent deploy.

The logs should start with pulling Docker images:

Here are the full logs for a Django app successfully starting in an App Service container:


2023-01-04T18:20:46.723Z INFO  - 3.10_20221128.12.tuxprod Pulling from appsvc/python
2023-01-04T18:20:46.731Z INFO  -  Digest: sha256:03576ed3edbc1bd69db8826f2cb5bbbdcfd483da4dab4f51f5050972491afef3
2023-01-04T18:20:46.733Z INFO  -  Status: Image is up to date for mcr.microsoft.com/appsvc/python:3.10_20221128.12.tuxprod
2023-01-04T18:20:46.760Z INFO  - Pull Image successful, Time taken: 0 Minutes and 0 Seconds
2023-01-04T18:20:47.273Z INFO  - Starting container for site
2023-01-04T18:20:47.285Z INFO  - docker run -d --expose=8000 --name djangocc-vssonnwxjqvw2-app-service_2_4555c318 -e WEBSITE_SITE_NAME=djangocc-vssonnwxjqvw2-app-service -e WEBSITE_AUTH_ENABLED=False -e PORT=8000 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=djangocc-vssonnwxjqvw2-app-service.azurewebsites.net -e WEBSITE_INSTANCE_ID=37c77572be775c3da9e718e75bdd3dcd6c69246656e6f77972070ce4bfef6455 -e HTTP_LOGGING_ENABLED=1 -e WEBSITE_USE_DIAGNOSTIC_SERVER=True appsvc/python:3.10_20221128.12.tuxprod python manage.py migrate && gunicorn --workers 2 --threads 4 --timeout 60 --access-logfile '-' --error-logfile '-' --bind=0.0.0.0:8000 --chdir=/home/site/wwwroot config.wsgi
2023-01-04T18:21:01.494Z INFO  - Initiating warmup request to container djangocc-vssonnwxjqvw2-app-service_2_4555c318_msiProxy for site djangocc-vssonnwxjqvw2-app-service
2023-01-04T18:21:01.524Z INFO  - Container djangocc-vssonnwxjqvw2-app-service_2_4555c318_msiProxy for site djangocc-vssonnwxjqvw2-app-service initialized successfully and is ready to serve requests.
2023-01-04T18:21:01.531Z INFO  - Initiating warmup request to container djangocc-vssonnwxjqvw2-app-service_2_4555c318 for site djangocc-vssonnwxjqvw2-app-service
2023-01-04T18:21:17.476Z INFO  - Waiting for response to warmup request for container djangocc-vssonnwxjqvw2-app-service_2_4555c318. Elapsed time = 16.0211255 sec
2023-01-04T18:21:33.263Z INFO  - Waiting for response to warmup request for container djangocc-vssonnwxjqvw2-app-service_2_4555c318. Elapsed time = 31.8082536 sec
2023-01-04T18:21:37.548Z INFO  - Container djangocc-vssonnwxjqvw2-app-service_2_4555c318 for site djangocc-vssonnwxjqvw2-app-service initialized successfully and is ready to serve requests.
2023-01-04T18:21:00.181237975Z    _____
2023-01-04T18:21:00.181266077Z   /  _  \ __________ _________   ____
2023-01-04T18:21:00.181271277Z  /  /_\  \\___   /  |  \_  __ \_/ __ \
2023-01-04T18:21:00.181274977Z /    |    \/    /|  |  /|  | \/\  ___/
2023-01-04T18:21:00.181278277Z \____|__  /_____ \____/ |__|    \___  >
2023-01-04T18:21:00.181281678Z         \/      \/                  \/
2023-01-04T18:21:00.181284978Z A P P   S E R V I C E   O N   L I N U X
2023-01-04T18:21:00.181288278Z
2023-01-04T18:21:00.181291278Z Documentation: http://aka.ms/webapp-linux
2023-01-04T18:21:00.181294478Z Python 3.10.4
2023-01-04T18:21:00.181297678Z Note: Any data outside '/home' is not persisted
2023-01-04T18:21:01.465437465Z Starting OpenBSD Secure Shell server: sshd.
2023-01-04T18:21:01.683907061Z Site's appCommandLine: python manage.py migrate && gunicorn --workers 2 --threads 4 --timeout 60 --access-logfile '-' --error-logfile '-' --bind=0.0.0.0:8000 --chdir=/home/site/wwwroot config.wsgi
2023-01-04T18:21:02.113255325Z Starting periodic command scheduler: cron.
2023-01-04T18:21:02.115641859Z Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite -userStartupCommand 'python manage.py migrate && gunicorn --workers 2 --threads 4 --timeout 60 --access-logfile '-' --error-logfile '-' --bind=0.0.0.0:8000 --chdir=/home/site/wwwroot config.wsgi'
2023-01-04T18:21:02.513265532Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2023-01-04T18:21:02.518158007Z Build Operation ID: |83xYwqpdGr0=.fe7a9527_
2023-01-04T18:21:02.519884905Z Oryx Version: 0.2.20220825.1, Commit: 24032445dbf7bf6ef068688f1b123a7144453b7f, ReleaseTagName: 20220825.1
2023-01-04T18:21:02.520450236Z Output is compressed. Extracting it...
2023-01-04T18:21:02.533109549Z Extracting '/home/site/wwwroot/output.tar.gz' to directory '/tmp/8daee7f6ddb01b5'...
2023-01-04T18:21:22.545602410Z App path is set to '/tmp/8daee7f6ddb01b5'
2023-01-04T18:21:22.611376378Z Writing output script to '/opt/startup/startup.sh'
2023-01-04T18:21:24.124335826Z Using packages from virtual environment antenv located at /tmp/8daee7f6ddb01b5/antenv.
2023-01-04T18:21:24.125478895Z Updated PYTHONPATH to '/opt/startup/app_logs:/tmp/8daee7f6ddb01b5/antenv/lib/python3.10/site-packages'
2023-01-04T18:21:31.321664095Z Operations to perform:
2023-01-04T18:21:31.321750300Z   Apply all migrations: account, admin, auth, contenttypes, sessions, sites, socialaccount, users
2023-01-04T18:21:31.321758001Z Running migrations:
2023-01-04T18:21:31.321762401Z   No migrations to apply.
2023-01-04T18:21:33.977904663Z [2023-01-04 18:21:33 +0000] [80] [INFO] Starting gunicorn 20.1.0
2023-01-04T18:21:33.986985797Z [2023-01-04 18:21:33 +0000] [80] [INFO] Listening at: http://0.0.0.0:8000 (80)
2023-01-04T18:21:33.987019399Z [2023-01-04 18:21:33 +0000] [80] [INFO] Using worker: gthread
2023-01-04T18:21:34.016193616Z [2023-01-04 18:21:34 +0000] [81] [INFO] Booting worker with pid: 81
2023-01-04T18:21:34.093535167Z [2023-01-04 18:21:34 +0000] [82] [INFO] Booting worker with pid: 82
2023-01-04T18:30:36.727907506Z djangocc-vssonnwxjqvw2-app-service : [200ef84a-9498-4d8b-8023-11edbfe8251f] Incoming request on /healthcheck?api-version=2021-08-01
2023-01-04T18:30:36.740159926Z djangocc-vssonnwxjqvw2-app-service : [200ef84a-9498-4d8b-8023-11edbfe8251f] Request to TokenService: Endpoint 169.254.129.14:8081, Port 8081, Path /healthcheck, Query ?api-version=2021-08-01, Method GET, UserAgent HealthCheck/1.0
2023-01-04T18:30:37.201139217Z djangocc-vssonnwxjqvw2-app-service : [200ef84a-9498-4d8b-8023-11edbfe8251f] Returning response for Site , Endpoint 169.254.129.14:8081, Port 8081, Path /healthcheck, Method GET, Result = 200

A few notable logs:

  • 2023-01-04T18:21:01.683907061Z Site's appCommandLine: python manage.py migrate && gunicorn --workers 2 --threads 4 --timeout 60 --access-logfile '-' --error-logfile '-' --bind=0.0.0.0:8000 --chdir=/home/site/wwwroot config.wsgi
    This log shows up because my app overrides the default startup command (`appCommandLine` in a Bicep file). If your app doesn't override the startup command, Oryx will generate a startup script for you that calls `gunicorn` to start the server. The default startup script does not call manage.py migrate, which is why my app overrides it.
  • 2023-01-04T18:21:31.321758001Z Running migrations:
    That proves that the container did indeed call `manage.py migrate` from my custom startup script.
  • 2023-01-04T18:21:33.977904663Z [2023-01-04 18:21:33 +0000] [80] [INFO] Starting gunicorn 20.1.0
    That's the start of the gunicorn server serving the Django app. After it starts, the logs should show HTTP requests.

If you aren't seeing the full logs, it's possible that your deploy happened too long ago and the portal has deleted some logs. In that case, open the Log stream and do another deploy, and you should see the full logs.

Alternatively, you can download the full logs from the Kudu interface. Select Advanced Tools from the side nav:

Screenshot of Azure Portal side nav with Advanced Tools selected

When the Kudu website loads, find the Current Docker Logs link and select Download as zip next to it:

Screenshot of website list of links

In the downloaded zip file, find the filename that starts with the most recent date and ends with "_default_docker.log":

Screenshot of extracted zip folder with files

Open that file to see the full logs, with the most recent logs at the bottom.