Saturday, July 27, 2013

My Year at Coursera

Just over a year ago, I realized that my passion in life is education and announced here that I would be joining Coursera as a frontend engineer to help them evolve online university-level education. It's been an incredible year, one where I've learnt a massive amount about frontend engineering, startup culture, and online education.

I've made the difficult decision to move on from Coursera engineering for a different role in the education space, but I really appreciate the time that I had at Coursera and I want to take a moment to look back on it here:

The Projects

I got the opportunity to both make new parts of our interface and re-think existing bits, both from the technology and usability point of view:

  • Made the first team page (screenshot)
  • Rewrote the course catalog and frontpage (screenshot)
  • Wrote the interface and APIs for our social profiles (blog post)
  • Improved the messaging of quiz deadlines to reduce student confusion
  • Rewrote our Django admin as a Backbone app backed by APIs (blog post)
  • Ported our legacy codebase to Bootstrap 2 (blog post)
  • Rewrote our forums as a Backbone app backed by APIs (blog post)
  • Coded the feature detection and half of the sign-up process for Signature Track (blog post)
  • Wrote an iCal feed for deadlines in classes (blog post)
  • Made a way for professors to connect forums to lectures
  • Improved the performance of our frontend (blog post)
  • Created a reporter wizard for students to get help and report bugs
  • Wrote hundreds of tests for the frontend (blog post), Django APIs, and legacy PHP code (blog post)

The Technology

I found myself in a stack of many technologies that I'd heard of but never used on a real project, so this was a great opportunity to learn them on a deeper level:

  • Frontend tools: Backbone, Underscore, Jade, Stylus
  • 3rd party APIs: Transloadit, Badgeville, Google Maps Places API
  • Languages: Python, PHP, Scala
  • Web frameworks: Django, Play
  • Amazon Services: S3, CloudFront, RDS, EC2, CloudSearch
  • MySQL and phpMyAdmin

The Culture

We went from 15 to 50 people over the course of the year, and during that time, we cultivated our own unique Coursera culture, one of fun and learning:

  • We came up with fun Formal Friday themes when we got bored of the formal thing - my favorite was when everyone dyed their hair pink to match mine.
  • We started a Show & Tell on Mondays, and are still doing it now, with people across the whole company sharing their hacks, their progress, even their family vacation photos.
  • We ran in the MudWarriors challenge together and practiced our handstands in the hallways.
  • We started an underground Twitter account.
  • We invited startup founders, engineers, and our favorite Coursera professors to give tech talks about their area of expertise, always with lively discussions after.

The People

The Coursera team is a bunch of the smartest and most passionate people I know. Many of them used to be teachers or TAs, they come from all over the world, and they're all motivated by the mission to improve education for everyone. They're also a whole lot of fun. :-)

If you want to join Coursera, they're pretty much always hiring - go join them!

...And stay tuned to find out about my next adventure.

Rewriting Django Admin in Backbone


This is an internal guide I wrote for Coursera about our site admin app, which is a Backbone "port" of Django admin. I've snapshotted it here in case it's interesting to other folks that are using Django admin and going down the same route, or just generally thinking about making an admin interface in Backbone.

A Bit of History

When Coursera first began advertising courses on the site, the process to add a new course was quite manual: our Course Ops team would work with the university admins to draft up HTML describing the course, and engineers would paste that HTML into the DB and make edits to it as requested. Or, that's how the lore goes.

Here's how that data might look on the site:

As you can imagine, that process doesn't scale: it took unnecessary engineering hours to make incremental edits to the course descriptions, and it did not please university admins to have to wait on such a slow cycle. Ideally, they could log in, edit the description, and see the changes live within a matter of minutes.

So we set about making an admin interface for the data. Since the site runs off a Python/Django/MySQL backend, we first went down the expected route: Django Admin, an app that's built into Django for easy editing of database tables, complete with different permission levels, edit logs, and an extensibility mechanism.

Here's what it looked like on our data:

Django Admin was easy to set up and skin, but when we started implementing requests from Course Ops to improve the editing experience and workflow, we found we had to fight against Django Admin, to hack into its core in ways that felt wrong. We wanted a lot of different ways of editing data, which we could do via custom Django Admin widgets, but that often meant writing HTML and JavaScript into our Python code itself. We also wanted different buttons in our forms depending on the state of the model, like "open a session", and to do that, we had to modify the base templates with hacky HTML and JavaScript.

You can see a few of our widgets and buttons here:

We were making it work, sure, but I wasn't sure how long we could keep making it work, and if I could bring myself to look at the codebase later, knowing how we'd contorted it to meet our needs. Once I realized that I was the likely engineer to be making bug fixes and implementing feature requests, I decided it was time to move away, fast, before we got too deep into it.

Site Admin

So, over the course of the 3-day Labor Day weekend, I wrote "site admin", an admin interface that used our new approach to writing frontends, with a Backbone frontend and a RESTful Django API. It wasn't feature-complete with Django admin after those 3 days (and it still isn't), but it was built to be extensible on the client-side in a way that Django Admin was not, and that has made it much easier to build on.

Let's walk through the API and the frontend.


The API

The site admin API is designed to be exactly the sort of API that Backbone expects: a RESTful JSON API. It's based off an open-source project called Djangbone, but is now heavily modified.

In admin_api/, the RestrictedAdminAPIView extends the generic Django View class, and defines get/post/put/delete methods that respond to the appropriate HTTP verb and understand how to generically fetch/create/edit/delete any model/collection. That file also contains AdminLogsView, which handles creation and retrieval of edit logs, and AdminSearchView, which is used by autocompletes in the frontend for finding models.

To set up an API for a particular set of models, we'd follow these steps, using the categories app and Category model as an example:

  • Update categories/ We add a static method to a Model class that returns back all of the models that the given user is allowed to administer.
    class Category(models.Model):
        @staticmethod
        def objects_administered_by_user(user):
            if user.is_superuser:
                return Category.objects.all()
            return Category.objects.none()
  • Create categories/ In it, we create a new view that extends RestrictedAdminAPIView, where we specify the base_queryset and base_model, corresponding to our Model, and we provide a list of fields that should be serialized into the JSON (we do not want to pass down all fields, particularly in the case of models with related fields, like course students). We also define get_add_form and get_edit_form, which return a Django Form subclass based on the request. We often serve different forms to different users, like when we want looser field validation for super users versus university admins.
    class CategoryAdminAPIView(RestrictedAdminAPIView):
        base_queryset = Category.objects.all()
        base_model = Category
        serialize_fields = CATEGORY_FIELDS + ('id', 'short_name')

        def get_add_form(self, request):
            if request.user.is_superuser:
                return NewCategoryAdminAPIForm
            return None

        def get_edit_form(self, request):
            if request.user.is_superuser:
                return EditCategoryAdminAPIForm
            return None

    class EditCategoryAdminAPIForm(AdminAPIForm):
        protected_fields = []  # fields that may not be cleared once set

        class Meta:
            model = Category
            fields = CATEGORY_FIELDS

        def clean(self):
            cleaned_data = super(EditCategoryAdminAPIForm, self).clean()
            short_name = cleaned_data.get('short_name')
            if short_name is not None and len(short_name) > 20:
                err = 'Please limit to 20 chars (currently %d).' % len(short_name)
                self._errors['short_name'] = self.error_class([err])
                del cleaned_data['short_name']
            if short_name is not None and not re.match(r'^[a-z0-9\-.]+$', short_name):
                self._errors['short_name'] = \
                    self.error_class(['Please limit to a-z,0-9.'])
                del cleaned_data['short_name']
            name = cleaned_data.get('name')
            if name is not None and len(name) > 60:
                err = 'Please limit to 60 chars (currently %d).' % len(name)
                self._errors['name'] = self.error_class([err])
                del cleaned_data['name']
            return cleaned_data

    class NewCategoryAdminAPIForm(EditCategoryAdminAPIForm):
        class Meta:
            model = Category
            fields = CATEGORY_FIELDS + ('short_name',)
  • Update admin_api/ We add 2 URL patterns for this model's API, to handle the model and collection verbs.
  • Update admin_api/ We add tests for the API, checking permissions and validation.
    def test_create_category(self):
        client = Client()
        cats_url = reverse('api_categories')
        # Test: super-user can create category
        data = {
            'name': 'Fake Category',
            'short_name': 'fake-cat',
        }
        response = self.post_json(client, cats_url, data)
        response_json = simplejson.loads(response.content)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response_json['name'], data['name'])
        self.assertEqual(response_json['short_name'], data['short_name'])
        # Test: can't edit short name after it's created
        cat_url = reverse('api_category', args=[response_json['id']])
        data['short_name'] = 'fakecat2'
        response = self.put_json(client, cat_url, data)
        response_json = simplejson.loads(response.content)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response_json['short_name'], 'fake-cat')
        # Test: can't use an invalid short name
        data['short_name'] = 'SoGreat OMG'
        response = self.post_json(client, cats_url, data)
        response_json = simplejson.loads(response.content)
        self.assertEqual(response.status_code, 400)
        self.assertEqual(response_json['short_name'],
            ['Please limit to a-z,0-9.'])
        # Test: instructors can't create categories
        response = self.post_json(client, cats_url, data)
        self.assertEqual(response.status_code, 400)

The Frontend

The site admin frontend is designed similarly to the backend: generic views that understand models and collections in general, with ways to specify the differences for each model.

ModelAdminPageView is responsible for creating a page with a header, banner, and then nesting a ModelAdminFieldsView which knows how to create a form with fields, buttons, and links to related models. To create that form, ModelAdminFieldsView calls upon a number of views which extend FieldView, like Select2View and HiddenInputView, and renders them depending on the attributes <-> fields mapping in a model.

CollectionAdminPageView is responsible for creating a page with a header, "new model" button, and then nesting a CollectionAdminListView, which knows how to create statistics charts via the nvd3 library and a tabular view of a collection via CollectionAdminTableView.

To add a model to the frontend, we follow these steps, using the category model as an example:

  • Create models/CategoryAdminModel.js: This model extends AdminModel, an extension of Backbone.Model. It defines properties that are needed by Backbone, like the API endpoint, as well as custom properties that are needed by the views (and yes, that is not a perfect separation of data and presentation, but life must go on). The custom properties include a mapping of attributes to form field types, any custom buttons and modals, attributes to filter by in the table view, and more.
    function($, Backbone, _, Coursera, AdminModel) {
      var model = AdminModel.extend({
        url: 'admin/categories',
        webUrlLabel: 'categories',
        label: 'Category',
        displayName: function() {
          return this.get('name');
        },
        fieldsets: function() {
          return [{
            name: 'name',
            type: 'text'
          }, {
            name: 'short_name',
            type: 'text',
            readonly: !this.isNew()
          }];
        }
      });
      return model;
    }
  • Create collections/CategoriesAdminCollection.js: This extends AdminCollection, and specifies a handful of properties needed by Backbone and our views. The bulk of the custom logic is in the model, not the collection.
    function(Backbone, Coursera, AdminCollection, CategoryAdminModel) {
      var collection = AdminCollection.extend({
        url: 'admin/categories',
        webUrlLabel: 'categories',
        label: 'Categories',
        model: CategoryAdminModel
      });
      return collection;
    }
  • Update site-admin/routes.js: This routes file extends Backbone.Router and defines generic URLs that can handle any AdminModel or AdminCollection. We add the new models and collections to modelAcls and collectionAcls, respectively, so that we know what collections to link a user to in the dashboard view. We enforce actual ACLs on the server-side, of course.
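The generic-route idea can be sketched in plain JavaScript (the helper and ACL list here are hypothetical, invented for illustration; the real routes.js extends Backbone.Router): a single URL pattern resolves any collection name, and the client-side ACL list only decides what to show, since real enforcement lives on the server.

```javascript
// Hypothetical sketch: one generic route pattern serves every admin
// collection, gated by a client-side ACL list (display only, not security).
var collectionAcls = ['categories', 'universities'];

function resolveAdminRoute(path) {
  var match = /^admin\/([a-z_]+)$/.exec(path);
  if (!match) return null;
  var name = match[1];
  // Unknown or unauthorized collections simply don't resolve to a view.
  return collectionAcls.indexOf(name) !== -1 ? { collection: name } : null;
}

console.log(resolveAdminRoute('admin/categories')); // { collection: 'categories' }
console.log(resolveAdminRoute('admin/secrets'));    // null
```

Adding a new model to the router is then just a matter of appending its name to the ACL lists, rather than writing a new route handler.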

The future

The site admin API and frontend have served us reasonably well as we have grown to want more editing abilities and workflow improvements, but there is much to be improved upon.

Collaborative Editing

While developing site admin by myself in my living room, there was one thing that never occurred to me: I was building a collaborative editing interface. We have many different sorts of admins that edit course data, everywhere from super users to university admins to instructors to TAs, and sometimes, there could be multiple admins editing at once. I didn't realize this, however, until after we unleashed site admin for a big launch and I got emails at 4am about instructors losing data and freaking out. I quickly realized how easily that could happen, if two staff were working on a course on different machines. As a quick fix, I added a notion of "protected fields" - fields that weren't allowed to go from something to nothing - and that prevented the worst case of admins losing all the text they'd painstakingly inputted. It does add a problem, however, of admins legitimately wanting to clear fields sometimes, and engineers needing to manually make that change. It also doesn't protect against losing incremental data in a field.

To make site admin work better for multiple editors, there are a few approaches we've thought of, which could be combined in some optimal way:

  • Partial updates: Currently site admin does a PUT of the full data of the model, and saves it wholly. An approach we take in many of our frontends now is to use Backbone's changedAttributes to track what changed, and only do an HTTP PATCH and partial update of the model. That would mean two admins could edit different fields and not worry about overriding each other's changes.
  • Real-time update: We could poll for updates to the model, updating the fields when we see changes. If the admin is currently editing an updated field, we could alert to confirm override or show them the changed version.
  • Notifications: We could keep track of what admins are on a page, and alert them about the possibility of concurrent edits, which might encourage them to consult with each other about what changes they are making.
  • Change confirmations: We could detect that something had changed since the admin started editing, and prompt the admin to confirm that yes, indeed, they are okay with that change.
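The diffing behind the partial-updates approach can be sketched in plain JavaScript (a hypothetical helper in the spirit of Backbone's changedAttributes, not our actual code):

```javascript
// Hypothetical helper: compute the minimal payload for an HTTP PATCH by
// comparing the attributes as fetched against the attributes as edited.
function diffAttributes(original, edited) {
  var patch = {};
  Object.keys(edited).forEach(function (key) {
    if (edited[key] !== original[key]) {
      patch[key] = edited[key];
    }
  });
  return patch;
}

// Two admins editing different fields produce non-overlapping patches,
// so neither request overwrites the other's work.
var fetched = { name: 'Game Theory', short_name: 'gametheory', deleted: false };
var edited = { name: 'Game Theory II', short_name: 'gametheory', deleted: false };
console.log(diffAttributes(fetched, edited)); // { name: 'Game Theory II' }
```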

DRYer Permissions

We currently have code in the frontend that mirrors the permissions on the backend, like figuring out what collections a particular admin has access to at all, and what fields should be read-only for particular admins. This code is problematic since it means changing a permission requires changes in two places, and it's also dangerous because an engineer could fool themselves into thinking that they'd secured something properly, if they did not put a test on the backend.

A better approach might be an API that sends down the permissions for an admin, perhaps using the HTTP OPTIONS verb. For example, an HTTP OPTIONS request to 'admin/api' might return the following:

{"collection": ["categories", "universities", "instructors"]}

When loading the form for a particular model, an HTTP OPTIONS request might return the following, based on inspecting the Django Form instances:

{"fields": {
   "short_name": {"read_only": false, "restriction": "[a-zA-Z]", "max_length": "20"}
}}

Those restrictions would also depend on whether the model was new or existing, and that would need to be represented in that API.
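To make that concrete, here's a hypothetical sketch of how a frontend might consume such a permissions payload (the helper name and the default-to-read-only policy are my own invention for illustration):

```javascript
// Hypothetical sketch: merge a server-sent permissions payload into a list
// of form field definitions. Fields the server says nothing about default
// to read-only, so a missing permission never becomes an editable field.
function applyPermissions(fields, permissions) {
  return fields.map(function (field) {
    var perms = permissions.fields[field.name] || {};
    return {
      name: field.name,
      type: field.type,
      readonly: perms.read_only === undefined ? true : perms.read_only,
      maxLength: perms.max_length ? parseInt(perms.max_length, 10) : null
    };
  });
}

var permissions = {
  fields: {
    short_name: { read_only: false, restriction: '[a-zA-Z]', max_length: '20' }
  }
};
var fields = [
  { name: 'short_name', type: 'text' },
  { name: 'internal_id', type: 'text' }
];
var rendered = applyPermissions(fields, permissions);
// short_name renders editable with a 20-char limit; internal_id stays read-only
```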

Soft Delete

When we do a delete in site admin now, it does a true delete, removing the rows and related rows from the database tables. That is a scary operation, of course, since it means that the only way to get back the data is to find it in an old database backup, provided we still have one from around the time of deletion.

We would prefer to do a soft delete, which would mean adding a "deleted" column to each model, setting that to true upon deletion, and changing all of our APIs to honor that deleted column when fetching data. This would also make undo easier. The biggest hurdle to this change would be auditing all of the APIs that call upon that data to make sure they respect the change.
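The soft-delete idea can be sketched in a few lines of plain JavaScript (a toy in-memory version, not our Django models): deletion flips a flag, every read path filters on it, and undo is just flipping the flag back.

```javascript
// Toy sketch of soft delete: nothing is ever removed from storage.
function softDelete(record) {
  record.deleted = true;
  return record;
}

function undelete(record) {
  record.deleted = false;
  return record;
}

// Every read path must honor the flag for this scheme to be safe.
function visibleRecords(records) {
  return records.filter(function (r) { return !r.deleted; });
}

var forums = [
  { id: 1, name: 'Forums', deleted: false },
  { id: 2, name: 'Old Forums', deleted: false }
];
softDelete(forums[1]);
// visibleRecords(forums) now shows only forum 1, but forum 2 is recoverable
```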

Drafts vs. Master

When an admin edits and saves their changes in site admin now, the changes are immediately live. Admins don't always want this; they often want to be able to preview their changes, feel confident in them, and then make them live. We have this in place for our course admin data, like quizzes and lectures, but it would require significant changes to the database tables, admin APIs, and user-facing APIs. A step in the right direction might be to make it possible for admins to preview their course pages with the unsaved data by sending it through an iframe and postMessage, perhaps. That is made more difficult by the fact that our admin APIs represent the data in a very different form than the user-facing APIs, however.

Improved Logs

We currently record who did what to which model, but the "what" is only whether it was a creation, edit, or delete. Ideally, we would also include exactly what fields were changed, and what the diff was between the fields. That would be made much easier to do by adding HTTP PATCH support. We also need better pagination and searching of the logs, and we may want to expose them to non-super-users.

Workflow-Based vs. Model-Based

Django admin revolves around models, assuming that the admin thinks to themselves, "Yes, I'd like to edit such and such today." However, as it turns out, admins more often think in terms of goals or workflows, such as "Today I'd like to release grades." or "Today I'd like to open the course for enrollment and let all the subscribers know." These workflows often involve different models at each step, and it is unintuitive for the user to have to figure out that sequence. For a few of them, we've created "Checklist" views, with steps and links to the relevant models, anchored at the form element that should be edited. But I think it would be worth it to re-think the admin interface from scratch with workflows as a first-class citizen, instead of something we've tacked on at the end.

Wednesday, July 24, 2013

A Guide to Writing Backbone Apps at Coursera


At Coursera, we made the choice to use the Backbone MVC framework for our frontends, and over the past year, we've evolved a set of best practices for how we use Backbone.

I wrote a guide for internal use that documents those best practices (much of it based on shorter blog posts here), and I've snapshotted it here on my blog to benefit other engineering teams using Backbone and to give potential Coursera engineers an idea of the current stack. This was snapshotted on July 24th, 2013, so please keep in mind that the Coursera frontend stack may change over time as the team figures out new and better ways to do things.

If you're interested in joining Coursera, check out the many job listings here. The frontend team is a really smart and fun bunch, and there are a lot of interesting technical and usability challenges in the future.

The Architecture

There are many different frontend architectures to choose from, and at Coursera, we have made the deliberate decision to opt for a very JavaScript-heavy, JavaScript-dependent approach to our frontend architecture:

We build up the entire DOM in JavaScript, loading the data via calls to RESTful JSON APIs, and handle state changes in the URL via the hash or HTML5 history API.

This approach has several advantages, at least as compared to a traditional data-rendered-into-HTML approach:

  • Usability: Our interfaces can easily be dynamic and real-time, enabling users to perform many interactions in a small period of time. This is particularly important for our administrative interfaces, where users want to be able to drag-and-drop, tick things on and off, and generally manipulate many little things that are present on one screen.
  • Developer Productivity: Since this architecture relies on the existence of APIs, that makes it easy for us to build new frontends for the same data, which encourages experimentation with new ways of viewing the same data. For example, after porting our forums to this architecture, I was able to create portable sidebar widgets based off the forums API in just a few hours.
  • Testability: The APIs and the frontends can both be tested separately and rigorously using the best suite of tools for the job.

It also has a few disadvantages:

  • Linkability: We have to go through a bit more work to make the JS-powered interfaces linkable, and previously simple things like internal anchors (page#section) are surprisingly difficult to implement.
  • Search/shareability: Since Facebook bots and search bots do not handle JS-rendered webpages as well, we have to go through more work to make our public pages indexable by them, which we've done through our Just-in-time renderer.
  • Testability: We have to write far more tests for our JS frontends since the user can change state via sequences of interactions, and some bugs may not surface until a particular sequence. We also now have state across URL routes when we use the HTML5 history API, and may have to test across multiple views.
  • Performance: We must be constantly monitoring our JavaScript to make sure we are not pushing the browser to do too much, as JavaScript can still be surprisingly slow at processing data and turning it into DOM.

However, given the usability benefits of the JS-rendered approach, we have elected to stick with it, and we will need to become experts in overcoming the disadvantages of the approach. At the same time, we can hope that the browsers and tools make those disadvantages slowly disappear, as this is an increasingly popular approach.

The APIs

We have APIs coming from Python/Django, PHP, and Scala/Play. We try to be consistent in the API design, and when possible, we opt for a RESTful JSON API.

For example, if I want to retrieve information about a forum, we'd perform an HTTP GET to a RESTful URL and expect a JSON to come back with an "id" attribute and other useful attributes.


HTTP GET /api/forums/1


{
    "id": 1,
    "parent_id": -1,
    "name": "Forums",
    "deleted": false,
    "created": "1369400797"
}

To create a new forum, we'd perform an HTTP POST to a RESTful URL with our JSON, and expect JSON to come back with the "id" filled in:


HTTP POST /api/forums/
{
    "parent_id": -1,
    "name": "Forums"
}


{
    "id": 1,
    "parent_id": -1,
    "name": "Forums",
    "deleted": false,
    "created": "1369400797"
}

To update an existing forum, we could do an HTTP PUT with the full JSON of the new properties, but when possible, we prefer to do an HTTP PATCH, only sending in the changed properties. That is a safer approach and means we are less likely to change attributes that we did not intend to change, and also makes our interfaces more usable by multiple people at once.


HTTP PATCH /api/forums/1
{
    "name": "Master Forums"
}


{
    "id": 1,
    "parent_id": -1,
    "name": "Master Forums",
    "deleted": false,
    "created": "1369400797"
}

To delete a forum, we could do an HTTP DELETE, but we prefer instead to set a deleted flag on the object, and make sure that we respect the flag in all of our APIs. We often have users that accidentally delete things, and it is much easier to restore if the information is still in the database.


HTTP PATCH /api/forums/1
{
    "deleted": true
}


{
    "id": 1,
    "parent_id": -1,
    "name": "Master Forums",
    "deleted": true,
    "created": "1369400797"
}

If we are retrieving many resources, we may want a paginated API, to avoid sending too much information down to the user. Here's what that might look like:


HTTP GET /api/forum/search?start_page=1&page_size=20


{"start_page": 1,
  "page_size": 20,
  "total_pages": 40,
  "total_results": 800,
  "posts": [ ... ]
}
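The arithmetic behind those pagination fields is simple; here's a hypothetical helper (not our actual API code) that derives total_pages from total_results and clamps an out-of-range start_page:

```javascript
// Hypothetical pagination helper matching the response fields above.
function paginate(totalResults, pageSize, requestedPage) {
  // An empty result set still has one (empty) page.
  var totalPages = Math.max(1, Math.ceil(totalResults / pageSize));
  // Clamp the requested page into [1, totalPages].
  var page = Math.min(Math.max(1, requestedPage), totalPages);
  return {
    start_page: page,
    page_size: pageSize,
    total_pages: totalPages,
    total_results: totalResults
  };
}

console.log(paginate(800, 20, 1));
// → { start_page: 1, page_size: 20, total_pages: 40, total_results: 800 }
```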

The JavaScript

JavaScript is a powerful language, but it can easily become a jumbled mess of global variables and files that are thousands of lines long. To keep our JavaScript sane, reusable, and modularized, we chose to use an MVC framework. There are approximately a million MVC frameworks to choose from, but we chose Backbone.JS as it has a large community of knowledge built up around it and it is lightweight enough to be used in many different ways.


Backbone provides developers with a means of separating presentation and data, by defining Models and Collections for the data, Views for the presentation, and triggering a rich set of events for communication between the models and views. It also provides an optional Router object, which can be used to create a single page web app that triggers particular views based on the current URL route. For a general introduction to Backbone, see these slides.

There is a lot that Backbone does not provide, however, and it's up to the app developer to figure out what else the app needs, and how much of that to get from open-source libraries versus write in-house. That's good because Backbone can lend itself to many different sorts of apps, with the right combination of add-ons, but it's bad because it takes longer to find those add-ons and get them working happily together. As a company, it is in our best interest to converge on a recommended set of add-ons and best practices, so that our code is more consistent across the codebase. At the same time, it's also in our best interest to continually challenge our best practices and make sure that we are using the right tool for the job. If we discover a particular add-on is too buggy or slow, we should phase it out of the codebase and document the reasons why.

There is a larger question, of course: Is Backbone the right framework for us, given how many new frameworks have come out recently that may entice us with promises of speed and flexibility? That is not a question that I have an answer for, but I do think that one can spend forever trying out new frameworks to find the perfect one, and time might be better spent building up best practices around a single framework. However, there may be a time at which we become sufficiently convinced that Backbone is no longer working for our codebase and it is worth the cognitive effort and engineering resources to invest in a new framework.

Here's an exploration of the add-ons and best practices that we use in our Backbone stack.

Backbone Models

A basic model might look like this:

define([
  'underscore',
  'backbone',
  'pages/forum/app',
  'js/lib/backbone.api'
], function(_, Backbone, Coursera, BackboneModelAPI) {

  var model = Backbone.Model.extend({
    api: Coursera.api,
    url: 'user/information'
  });

  _.extend(model.prototype, BackboneModelAPI);

  return model;
});

We start off by declaring the JS dependencies for the model:

  • underscore: This is a collection of generic utility functions for arrays, objects, and functions, and it's common to find yourself using them, so most models and views will include it.
  • backbone: This is necessary for extending Backbone.Model
  • pages/forum/app: Every model will depend on an "app.js", which defines a base URL for API calls and a few other details. It adds objects to the Coursera singleton variable, like Coursera.api, which is used by the Model.
  • js/lib/backbone.api: This is a Backbone-specific wrapper for api.js that overrides the sync method and adds create/update/read/delete methods. The api.js library is an AJAX API wrapper that takes care of emulating patch requests, triggering events, showing AJAX loading/loaded messages via asyncMessages.js, and creating CSRF tokens in the client.

Then we define an extension of the Backbone.Model object with api and url options that help Backbone figure out where and how to pull the data for the model, and it mixes in the BackboneModelAPI prototype at the end of the file.

Backbone Models: Relational Models

Out of the box, Backbone will take JSON from a RESTful API and automatically turn it into a Model or a Collection of Models. However, we have many APIs that return JSON that really represent multiple models (from multiple tables in our MySQL database), like courses with universities:

[{"name": "Game Theory",
  "id": 2,
  "universities": [{"name": "Stanford"}, {"name": "UBC"}]
}]

We quickly realized we needed a way to model that on the frontend, if we wanted to be able to use model-specific functionality on the nested models (which we often do).

Backbone-relational is an external library that makes it easier to deal with turning JSON into models/collections with sub collections inside of them, by specifying the relations like so:

var Course = Backbone.RelationalModel.extend({
   relations: [{
      type: Backbone.HasMany,
      key: 'universities',
      relatedModel: University,
      collectionType: Universities
   }]
});
We started using that for many of our Backbone apps, but we've had some performance and caching issues with it, so we've started stripping it out of our model-heavy apps and manually doing the conversion into nested models.

For example, here's how the Topic model turns a nested courses array into a Courses collection:

  var Topic = Backbone.Model.extend({
    defaults: {},

    idAttribute: 'short_name',

    initialize: function() {
      this.bind('change', this.updateComputed, this);
    },

    updateComputed: function() {
      var self = this;
      if (!this.get('courses') || !(this.get('courses') instanceof Courses)) {
        this.set('courses', new Courses(this.get('courses')), {silent: true});
        this.get('courses').each(function(course) {
          if (!course.get('topic') || !(course.get('topic') instanceof Topic)) {
            course.set('topic', self);
          }
        });
      }
    }
  });
For a trickier example, here's how the Course model sets a nested Topic model. It has to require the Topic file dynamically, to avoid a cyclic dependency in the initial requires which will wreak all sorts of havoc:

  var Course = Backbone.Model.extend({
    defaults: {},

    initialize: function() {
      this.bind('change', this.updateComputed, this);
    },

    updateComputed: function () {
      // We must require it here due to Topic requiring Courses
      var Topic = require("js/models/topic");
      if (this.get('topic') && !(this.get('topic') instanceof Topic)) {
        this.set('topic', new Topic(this.get('topic')), {silent: true});
      }
    }
  });

We could also look into using Backbone.nested, which seems like a more lightweight library than Backbone-relational, and it may have fewer performance issues.

Backbone Views

Here's what a basic Backbone view might look like:

define(['jquery', 'underscore', 'backbone', 'js/core/coursera',
        'pages/site-admin/views/NoteView.html'],
function($, _, Backbone, Coursera, template) {
  var view = Backbone.View.extend({
    render: function() {
      var field = this.options.field;
      this.$el.html(template({
        config: Coursera.config,
        field: field
      }));
      return this;
    }
  });

  return view;
});

We start off by declaring the JS dependencies for the view:

  • jquery: We often use jQuery in our views for DOM manipulation, so we almost always include it.
  • underscore: Once again, underscore's utility functions are useful in views as well as models (in particular, debounce and throttle are great for improving performance of repeatedly called functions.)
  • backbone: We must include Backbone so that we can extend Backbone.View.
  • js/core/coursera: We include this so that we have a handle on the Coursera singleton variable, which contains useful information like "config" that includes the base URL of assets, which we often need in templates.
  • pages/site-admin/views/NoteView.html: This is a particular Jade template that's been auto-compiled into an *.html.js file, and we include it so we can render the template to the DOM. We try to keep all of our HTML and text in templates, out of our view JS.

Then we create the view and define the render function, which passes in Coursera.config and a view configuration option into a template, and renders that template into the DOM.
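As an aside on the debounce/throttle utilities mentioned above: a view that re-renders on every model change can waste work during a burst of changes. Here is a minimal sketch of what a debounce wrapper does (a simplification for illustration, not Underscore's actual source; in real code we just call `_.debounce`):

```javascript
// Collapse a burst of calls into a single call that fires once the
// burst has gone quiet for `waitMs` milliseconds.
function debounce(fn, waitMs) {
  var timer = null;
  return function() {
    var self = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function() {
      fn.apply(self, args);
    }, waitMs);
  };
}
```

In a view, you might wrap an expensive render with `this.render = _.debounce(this.render, 100);`, so that ten rapid "change" events trigger only one repaint.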

Backbone Views: Templating

Backbone requires Underscore as a dependency, and since Underscore includes a basic templating library, that's the one you'll see in the Backbone docs. However, we wanted a bit more out of our templating library.

Jade is a whitespace-significant, bracket-less HTML templating language. It's clean to look at because of the lack of brackets and the enforced indenting (like Python and Stylus), but one of its best features is that it auto-closes HTML tags. We've dealt with too many strange bugs from unclosed tags, and that's one more thing we don't have to worry about when using Jade. Here's an example:

    h1 #{book.get('title')}
    each author in book.get('authors')
        a(href=author.get('url')) #{author.get('name')}
    if book.get('published')
        a.btn.btn-large(href="/buy") Buy now!

We could also consider using Handlebars, Mustache, or many other options.

Backbone Views: Referencing DOM

Inside a view, we find ourselves referencing the DOM from the templates repeatedly: to set up events, read off values, or do slight manipulations. For example, here's what a view might look like:

var ReporterView = Backbone.View.extend({
  render: function() {
    // ...
  },
  events: {
     'change .coursera-reporter-input': 'onInputChange',
     'click .coursera-reporter-submit': 'onSubmitClick'
  },
  onInputChange: function() {
    this.$('.coursera-reporter-submit').attr('disabled', null);
  },
  onSubmitClick: function() {
    this.model.set('title', this.$('.coursera-reporter-input').val());
  }
});

There are a few non-optimal aspects of the way that we reference DOM there:

  • We are repeating those class names in multiple places. That means changing a class name means changing it in many places - not so DRY!
  • We are using CSS class names for events and manipulation. That means our designers can't safely refactor the CSS without affecting functionality, and it also means we must come up with overly long, explicit class names to avoid clashes with other CSS names, since we bundle our CSS together.

To avoid repeating the class names, we can store them in a constant that is accessible anywhere in the view, and only access them via that constant. For example:

var ReporterView = Backbone.View.extend({
  dom: {
     SUBMIT_BUTTON: '.coursera-reporter-submit',
     INPUT_FIELD:   '.coursera-reporter-input'
  },
  render: function() {
    // ...
  },
  events: function() {
    var events = {};
    events['change ' + this.dom.INPUT_FIELD]    = 'onInputChange';
    events['click ' +  this.dom.SUBMIT_BUTTON]  = 'onSubmitClick';
    return events;
  },
  onInputChange: function() {
    this.$(this.dom.SUBMIT_BUTTON).attr('disabled', null);
  },
  onSubmitClick: function() {
    this.model.set('title', this.$(this.dom.INPUT_FIELD).val());
  }
});

As a bonus, this technique gives us easier-to-maintain testing code:

it('enables the submit button on change', function() {
  var view = new ReporterView().render();
  view.$(view.dom.INPUT_FIELD).trigger('change');
  chai.expect(view.$(view.dom.SUBMIT_BUTTON).attr('disabled')).to.be.undefined;
});

As for the use of class names at all: we can avoid them by using data attributes instead, perhaps prefixed with js-* to indicate their use in JS. We would still have CSS class names in the HTML templates, but only for styling purposes.

So then our DOM would look something like:

var ReporterView = Backbone.View.extend({
  dom: {
     SUBMIT_BUTTON: '[data-js-submit-button]',
     INPUT_FIELD:   '[data-js-input-field]'
  }
  // ...
});

Note that selecting via data attributes has been shown to be less performant than selecting via classes, but for the vast majority of our views, that performance difference is insignificant.
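For completeness, the Jade template behind that view might declare both the styling classes and the JS hooks. The attribute names below mirror the selectors above; the rest of the markup is hypothetical:

```jade
.coursera-reporter
  input.coursera-reporter-input(type="text", data-js-input-field)
  button.coursera-reporter-submit.btn(data-js-submit-button, disabled) Submit
```

The classes stay free for the designers to restyle or rename, while the data attributes belong to the JS.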

Backbone Views: Data Binding

Backbone makes it easy for you to find out when attributes on your Model have changed, via the "change" event, and to query for all attributes changed since the last "change" event via the changedAttributes method, but it does not officially offer any data ↔ DOM binding. If you are building an app where the user can change the data after it's been rendered, then you will find yourself wanting some sort of data binding to re-render that data when appropriate. We have many parts of Coursera where we need very little data-binding, like our course dashboard and course description pages, but we have other parts which are all data-binding, all-the-time, like our discussion forums and all of our admin editing interfaces.

Backbone.stickit is a lightweight data-binding library that we've started to use for a few of our admin interfaces. Here's a simple example from their docs:

Backbone.View.extend({
  bindings: {
    '#title': 'title',
    '#author': 'authorName'
  },
  render: function() {
    this.$el.html('<div id="title"/><input id="author">');
    this.stickit();
  }
});

We still do custom data-binding for many of our views (using the "change" event, changedAttributes(), and partial re-rendering), and I like that because it gives me the most control to decide exactly how a view should change, and I don't have to fight against a binding library's assumptions.
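The shape of that custom approach can be sketched in a few lines. This is a stripped-down illustration of the pattern, not our actual code: a toy model stands in for Backbone.Model, and per-attribute render functions stand in for partial re-rendering of the DOM:

```javascript
// Toy stand-in for a Backbone.Model: tracks which attributes changed
// on the last set() and notifies listeners, like the "change" event.
function TinyModel(attrs) {
  this.attributes = attrs || {};
  this.changed = {};
  this.listeners = [];
}
TinyModel.prototype.on = function(fn) { this.listeners.push(fn); };
TinyModel.prototype.set = function(updates) {
  this.changed = {};
  for (var key in updates) {
    if (this.attributes[key] !== updates[key]) {
      this.changed[key] = updates[key];
      this.attributes[key] = updates[key];
    }
  }
  for (var i = 0; i < this.listeners.length; i++) this.listeners[i](this);
};
TinyModel.prototype.changedAttributes = function() { return this.changed; };

// Only re-render the fragments whose backing attributes actually changed.
function bindPartialRender(model, renderers) {
  model.on(function() {
    var changed = model.changedAttributes();
    for (var key in changed) {
      if (renderers[key]) renderers[key](changed[key]);
    }
  });
}
```

With a real Backbone.Model, the same shape falls out of `this.listenTo(this.model, 'change', ...)` plus `model.changedAttributes()`.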

We could also consider using Knockback.

Maintaining State: Single-Page-Apps vs. Widgets

After we've created a view for our frontend, we still have big decisions to make:

  • How will users get to that view?
  • What state of the view will be kept in the URL, i.e., what can the user press back on and what can they bookmark?
  • Will our view be used in multiple parts of the site or just one?

In our codebase, we have two main approaches to those questions: "single page apps" and "widgets".


Single-Page Apps

Besides being the buzzword du jour, a single-page app ("SPA") is what Backbone was originally designed for, via its Backbone.Router object. A SPA defines a set of routes, and each route is mapped to a function that renders a particular view into a part of the page. Backbone.History then takes care of figuring out which route the current URL refers to, and calling that function. It also takes care of changing the URL using the HTML5 History API (which makes it appear like a normal URL change) or window.location.hash in older browsers.

For example, we could have this routes file:

define(['jquery', 'backbone', 'js/core/coursera'],
function($, Backbone, Coursera) {

  var routes = {};
  var triageurl = Coursera.config.dir.home.replace(/^\//, "triage");

  routes[triageurl + '/items'] = function() {
    new MainView({el: $('.coursera-body')});
  };

  // register the mapping on our global router (an extension of Backbone.Router)
  for (var route in routes) {
    Coursera.router.route(route, route, routes[route]);
  }

  $(document).ready(function() {
    Backbone.history.start({pushState: true});
  });
});

After declaring its dependencies, it defines a mapping of routes, adds those to our global Coursera.router (an extension of Backbone.Router) and then kicks off Backbone.history.start() on page load.

SPAs: Syncing Users

More typically, for our logged in-apps, we will attempt to login the user before calling the routes, and our document.ready callback will look like this:

    (new User())
      .sync(function(err) {
        Coursera.user = this;
        if (!Backbone.history.start({
          pushState: true
        })) {
          Coursera.router.trigger("error", 404);
        }
      });

SPAs: Regions

Backbone lets you create views and render them into arbitrary parts of your DOM, but many developers soon run into the desire for standard "regions" or "layouts". We want to specify different parts of the page - like the header, footer, and main area - and only swap out the views in those parts across routes. That's a better user experience, since there's no unnecessary refreshing of unchanging DOM.

For that, we use origami.js, a custom library that lets us create regions associated with views, and then in a route, we'll specify which region we want to replace with a particular view file, plus additional options to pass to that view. In the view, we can bind to region events like "view:merged" or "view:appended" and take appropriate actions.

In our SPAs, we always render into the regions instead, so our routes code looks more like the following. The syntax is a bit unwieldy, but it gets the job done:

routes[triageurl + '/items/:id'] = function(id) {
  return {
    "pages/home/template/page": {
      regions: {
        body: {
          "pages/triage/views/MainView": {
            id: "MainView",
            initialize: {
              openItemId: id
            }
          }
        }
      }
    }
  };
};

We could also consider using Marionette.js or Chaplin.

SPAs: Dirty Models

In traditional web apps, it's common practice to warn a user before leaving a page that they have unsaved data, using the window.onunload event. However, we no longer have that event in Backbone SPAs, since what looks like a window unload is actually just a region swap in JS. So, we built a mechanism into origami.js that inspects a view for a "dirty model" before swapping a view, and it throws up a modal alert if it detects that.

To utilize this, a view needs to specify a hasUnsavedModel function and return true or false from that:

var view = Backbone.View.extend({
    // ...
    hasUnsavedModel: function() {
       // e.g. true while the save button is still enabled
       return !this.$('.coursera-save-button').is(':disabled');
    }
});

SPAs: Internal Links

In traditional web apps, it is easy to link to a part of a page using an internal anchor, like /terms#privacy. However, in a SPA, the hash cannot be used for internal anchors, since it is used as the fallback technology for the main URL in some browsers, and the URL would actually be /#terms#privacy. We have experimented with various alternative approaches to internal links, and the current favorite approach is to use a URL like /terms/privacy, define a route that understands that URL, pass the "section" into the view, and use JS to jump to that part of the view, post-rendering. For example:

In the routes file:

  routes[home + "about/terms/:section"] = function(section) {
    return {
      "pages/home/template/page": {
        regions: {
          body: {
            "pages/home/about/tosBody": {
              initialize: {section: section}
            }
          }
        }
      }
    };
  };

In the view file:

var tosBody = body.extend({
    initialize: function() {
      var that = this;
      document.title = "Terms of Service | Coursera";

      that.bind("view:merged", function(options) {
        if (options && options.section) {
          util.scrollToInternalLink(that.$el, options.section);
        }
      });
    }
    // ...
});

In the Jade template:

h2(data-section="privacy") Privacy Policy


Widgets

In some cases, we do not necessarily want our Backbone view to take full control over the URL, like if we want to easily have arbitrary, multiple Backbone views on the same page. We take that approach in our class platform, because that ultimately makes it easier for professors to compose views to their own liking (i.e. if they'd like to mix a forum thread and a wiki view on the same page, that should be easy for them).

To create a widget, we use a declarative HTML syntax, specifying data attributes that define the widget type and additional attributes to customize that instance of the widget:

<div data-coursera-reporter-widget
     data-coursera-reporter-title="..."
     data-coursera-reporter-url="...">
  Just one moment while we load up our reporter wizard...
</div>

Then, we create a widgets.js file that will be included on that page, and knows how to turn DOM elements into Backbone views. Typically that file would know about multiple widgets, but we show one here to save space:

function($, _, Backbone, Coursera, ReporterView) {
  $(document).ready(function() {

    $('[data-coursera-reporter-widget]').each(function() {
      var title = $(this).attr('data-coursera-reporter-title');
      var url   = $(this).attr('data-coursera-reporter-url');
      new ReporterView({el: this, itemTitle: title, itemUrl: url}).render();
    });
  });
}


Widgets: Maintaining State

We still want to maintain state within those views and support the back button, however, without changing the main URL of the page.

jQuery BBQ is an external non-Backbone specific library for maintaining history in the hash, and as it turns out, it works pretty well with Backbone. You can read my blog post on it for a detailed explanation.

We could also consider using Backbone.Widget.

Testing Architecture

First, let it be said: testing is important. We are building a complex product for many users that will pass through many engineers' hands, and the only way we can have a reasonable level of confidence in making changes to old code is if there are tests for it. We will still encounter bugs and users will still use the product in ways that we did not expect, but we can hope to avoid some of the more obvious bugs via our tests, and we can have a mechanism in place to test regressions. Traditionally, the frontend has been the least tested part of a webapp, since it was the "dumb" part of the stack, but now that we are putting so much logic and interactivity into our frontend, it needs to be just as thoroughly tested as the backend.

There are various levels of testing that we could do on our frontends: Unit testing, integration testing, visual regression testing, and QA (manual) testing. Of those, we currently only do unit testing and QA testing, but it's useful to keep the others in mind.

Unit Testing

When we call a function with particular parameters, does it do what we expect? When we instantiate a class with given options, do its methods do what we think they will? There are many popular JS unit testing frameworks now, like Jasmine, QUnit, and Mocha.

We do a form of unit testing on our Backbone models and views, using a suite of testing technologies:

  • Mocha: An open-source test runner library that gives you a way to define suites of tests with setup and teardown functions, and then run them via the command line or browser. It also gives you a way to signal asynchronous test completion. For example:
    describe('tests for the reporter library', function() {
      beforeEach(function() {
        // do some setup code
      });
      afterEach(function() {
        // do some cleanup code
      });
      it('renders the reporter template properly', function() {
        // test stuff
      });
      it('responds to the ajax request correctly', function(done) {
        // in some callback, call: done();
      });
    });
  • Chai: An open-source test assertion library that provides convenient functions for checking the state of a variable, using a surprisingly readable syntax. For example:
        chai.expect(course.get('name')).to.equal('Game Theory');
        chai.expect(course.get('universities').length).to.be.above(0);
  • JSDom: An open-source library that creates a fake DOM, including fake events. This enables us to test our views without actually opening a browser, which means that we can run quite a few tests in a small amount of time. For example, we can check that clicking changes some DOM:
          var view = new ReporterView().render();
          view.$('.coursera-reporter-submit').click();
          var $tips = view.$el.find('[data-problem=quiz-wronggrade]');
          chai.expect($tips.length).to.be.above(0);
  • SinonJS: An open-source library for creating stubs, spies, and mocks. We use it the most often for mocking out our server calls with sample data that we store with the tests, like so:
        var forumThreadsJSON  = JSON.parse(fs.readFileSync(path.join(__filename, '../../data/forum.threads.firstposted.json')));
        server    = sinon.fakeServer.create();
        server.respondWith("GET", getPath('/api/forum/forums/0/threads?sort=firstposted&page=1'),
            [200, {"Content-Type":"application/json"}, JSON.stringify(forumThreadsJSON)]);
        // We call this after we expect the AJAX request to have started:
        server.respond();

    We can also use it for stubbing out functionality that does not work in JSDom, like functions involving window properties, or functionality that comes from 3rd party APIs:

          var util = browser.require('js/lib/util');
          sinon.stub(util, 'changeUrlParam', function(url, name, value) { return url + value;});
          var BadgevilleUtil = browser.require('js/lib/badgeville');
          sinon.stub(BadgevilleUtil, 'isEnabled', function() { return true;});

    Or we can use it to spy on methods, if we just want to check how often they're called. Sometimes this means making an anonymous function into a view method, for easier spy-ability:

        sinon.spy(view, 'redirectToThread');
        // do some stuff that should cause the method to be called
        chai.expect(view.redirectToThread.calledOnce).to.be.true;

Besides those testing-specific libraries, we also use NodeJS to execute the tests, along with various Node modules:

  • require: Similar to how we use this in our Backbone models and views to declare dependencies, we use require in the tests to bring in whatever libraries we're testing.
  • path: A library that helps construct paths on the file system.
  • fs: A library that helps us read our test files.

Let's see what all of that looks like together in one test suite. These are a subset of the tests for our various about pages. The first test is a very simple one, for a basically interaction-less, AJAX-less page. The second test is for a page that does an AJAX call:

describe('about pages', function() {
  var chai = require('chai');
  var path = require('path');
  var env  = require(path.join(testDir, 'lib', 'environment'));
  var fs   = require('fs');

  var Coursera;
  var browser;
  var sinon;
  var server;
  var _;

  beforeEach(function() {
    browser   = env.browser(staticDir);
    Coursera  = browser.require('pages/home/app');
    sinon     = browser.require('js/lib/sinon');
    _         = browser.require('underscore');
  });

  describe('aboutBody', function() {

    it('about page content', function() {
      var aboutBody = browser.require('pages/home/about/aboutBody');
      var body      = new aboutBody();
      var view      = body.render();

      chai.expect(document.title).to.equal('About Us | Coursera');
    });
  });

  describe('jobsBody and jobBody', function() {

    var jobs     = fs.readFileSync(path.join(__filename, '../../data/about/jobs.json'), 'utf-8');
    var jobsJSON = JSON.parse(jobs);

    beforeEach(function() {
      server = sinon.fakeServer.create();
      server.respondWith("GET", Coursera.config.url.api + "common/jobvite.xml",
        [200, {"Content-Type":"application/json"}, jobs]);
    });

    it('job page content', function(done) {
      var jobBody = browser.require('pages/home/about/jobBody');
      var view    = new jobBody({jobId: jobsJSON[0].id});

      // Once the AJAX-triggered render happens, check the rendered DOM
      var renderJob = sinon.stub(view, 'renderJob', function() {
        renderJob.restore();
        view.renderJob.apply(view, arguments);
        chai.expect(view.$('.coursera-about-body h2').text()).to.not.be.empty;
        done();
      });

      view.render();
      chai.expect(document.title).to.equal('Jobs | Coursera');
      server.respond();
    });
  });
});


Integration testing

Can a user go through the entire flow of sign up, enroll, watch a lecture, and take a quiz? This type of testing can be done via Selenium WebDriver, which opens up a remote controlled browser on a virtual machine, executes commands, and checks expected DOM state. The same test can be run on multiple browsers, to make sure no regressions are introduced cross-browser. They can be slow to run, since they do start up an entire browser, so it is common to use cloud services like SauceLabs to distribute tests across many servers and run them in parallel on multiple browsers.

There are client libraries for the Selenium WebDriver written in several languages, the most supported being Java and Python. For example, here is a test for our login flow that enters the user credentials and checks the expected DOM:

from selenium.webdriver.common.by import By
import BaseSitePage

class SigninPage(BaseSitePage.BaseSitePage):
    def __init__(self, driver, waiter):
        super(SigninPage, self).__init__(driver, waiter)

    def valid_login(self, email, password):
        self.enter_text('#signin-email', email)
        self.enter_text('#signin-password', password)
        self.click('.coursera-signin-button')  # click helper from BaseSitePage
        self.wait_for(lambda: \
                self.is_title_equal('Your Courses | Coursera'))

We do not currently run our Selenium tests, as they are slow and fragile, and we have not had the engineering resources to make them more stable and easier to develop locally. We may outsource the writing and maintenance of these tests to our QA team one day, or hire a testing engineer to improve them, or both.

Visual regression testing

If we took a screenshot of every part of the site before and after a change, do they line up? If there's a difference, is it on purpose, or should we be concerned? This would be most useful for checking the effects of CSS changes, which can range from subtle to fatal.

There are few apps doing this sort of testing, but there's a growing recognition of its utility, and thus we're seeing more libraries come out of the woodwork for it. Here's an example using Needle with Selenium:

from needle.cases import NeedleTestCase

class BBCNewsTest(NeedleTestCase):
    def test_masthead(self):
        self.assertScreenshot('#blq-mast', 'bbc-masthead')

There's also Perceptual Diffs, PhantomCSS, CasperJS, and SlimerJS. For a more manual approach, there's the Firefox screenshot command with Kaleidoscope. Finally, there's dpxdt (pronounced depicted).

We do not do visual regression testing at this time, due to lack of resources, but I do think it would be a good addition in our testing toolbelt, and would catch issues that no other testing layers would find.

QA (manual) testing

If we ask a QA team to try a series of steps in multiple browsers, will they see what we expect? This testing is the slowest and least automate-able, but it can be great for finding subtle usability bugs, accessibility issues, and cross-browser weirdness.

Typically, when we have a new feature and we've completed the frontend per whatever we've imagined, we'll create a worksheet in our QA testing spreadsheet that gives an overall description of the feature, a staging server to test it on, and then a series of pages or sequences of interactions to try. We'll also specify what browsers to test in (or "our usual" - Chrome, FF, IE, Safari, iPad), and anything in particular to look out for. QA takes about a night to complete most feature tests, and depending on the feedback, we can put a feature through multiple QA rounds.

Additional Reading

The following slides and talks may be useful as a supplement to this material (and some of it served as a basis for it):

Tuesday, July 23, 2013

What to look for in a software engineering culture

When I chat with new programmers that are interviewing for their first ever software engineering job, I encourage them to try to figure out if they'll be in a healthy engineering culture at the prospective job by asking the right questions. For a comprehensive run-down of what a great engineering team is made up of, I typically tell them to read Team Geek, a book by my former colleagues, but for a shorter list, I've written up this post with my own thoughts based on my experiences so far at Google and Coursera.

It's important to find a job where you get to work on a product you love or problems that challenge you, but it's also important to find a job where you will be happy inside their codebase - where you won't be afraid to make changes and where there's a clear process for those changes.

Here are some of the things that I look for:

Code Reviews

Are code reviews a regular part of their process?

Before my first job at Google, I had never experienced a code review. Now that I have, I never want to submit code to a shared codebase without a review. The point of a code review isn't for another engineer to spot a flaw in your code (though that can happen), the point is for them to make sure the code entering the codebase is following your team's conventions and best practices - to make sure that code "belongs". Code reviews are a great way for new engineers to learn how to work inside a new codebase, and also a way for existing engineers to discover best practices their colleagues have picked up in other projects, or learn about functionality they didn't realize they had.

Using the Mondrian tool at Google, we had a very clear code review process, where a changelist could not actually be submitted until the reviewer gave the "approval." Using Github's more lightweight code reviews at Coursera, we've had to come up with our own conventions on top of it, where the reviewer will say "Merge when ready" when they're happy or the reviewee will say "Please take another look" if they want a second review.

Regardless of the particulars of the process, you want to be on a team where code reviews are required for every bit of code, and the reviews are there to make everyone's code fit better together.

Coding Conventions

Do they follow standard conventions for every language in their stack?
Do they have their own conventions for the frameworks they use?

Google has publicly documented coding conventions for all their languages and internally, they actually have a formal process to verify that an engineer is well-versed in a particular language's conventions - you submit a 300-line code review in that language to a designated reviewer, and if it's approved, you've earned "readability" in that language and that grants you code submission rights. That process was quite useful as a learning tool for me, and it was a proud moment when I earned my JS readability badge, but that may be a bit much for most teams.

At Coursera, we document what coding conventions we follow for each language, and we try to use the industry standard if one exists, and then we encourage engineers to install plugins that will check for violations (like SublimeLinter with PEP8). We also have a build tool that uses JSHint to check for major JS syntax no-nos, and our Jenkins build tests for PEP8 violations. Besides the languages we use, we also document conventions for frameworks like Backbone.JS and come up with design guidelines for our REST APIs.
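To illustrate the "plugins that check for violations" point: a linter config makes the conventions machine-checkable instead of tribal knowledge. Here's a hypothetical minimal .jshintrc (the option names are real JSHint options; the chosen values are just an example, not our actual config):

```json
{
  "curly": true,
  "eqeqeq": true,
  "undef": true,
  "unused": true
}
```

Here, curly requires braces around all blocks, eqeqeq requires === / !== over == / !=, undef flags undeclared variables, and unused flags variables that are never used. Checking a file like this into the repo means every engineer's editor and the build enforce the same rules.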

Basically, any time that there is a technology that can be used in multiple ways, and a team has decided upon a particular way to use it, that should be documented as a convention. As an engineer joining the team, you want to know that you'll have conventions to help you decide how to do what you do, so that your code will become part of a cohesive codebase.


Documentation

Is complex code commented?
Are there readmes describing higher-level systems in the codebase?

We tend to think that our code is obvious, but, as the writers of it, we're obviously biased. That's why I think an important part of a good code review is for the reviewer to be honest when they're reading code that doesn't make immediate sense, and for them to ask for a comment to be added. I also find that sometimes higher-level overviews are needed for systems that span multiple parts of the codebase, and those may be particularly useful for new engineers that join and find themselves tasked with adding to an existing system.


Testing

Are there any tests?
What kind of tests?
How often are the tests run?
Is there a testing requirement for new features?
Is there a testing engineer or testing team?

As it happens, many codebases start off as prototypes, with the engineers behind them thinking "Ah, we don't need to test this thing, we'll just throw it away." But then those untested codebases become the real product, and the longer a codebase goes untested, the harder it is to introduce tests. When we found ourselves in that situation at Coursera, we held a "Testathon", locking ourselves in a room until we had figured out how to test all the different parts of our codebases and had written a few example tests for each part. Then, for every feature going forward, we made the question "are there tests?" a regular part of the code review process.

There are different approaches to testing that are all valid - some teams might practice TDD, where they write the tests before the regular code, some teams may require tests as part of writing code (but in whatever order the engineer prefers), and some teams may hire testing engineers and entrust them with writing the tests and keeping the tests stable.

The thing to look for is whether a team cares about testing at all, or if they simply rely on engineers writing "safe code". You do not want to join a team that expects you to magically write untested code that doesn't break other parts of the system that you've never seen and that will never be broken by another engineer's future code. You want to join a team that knows the value of tests, and is doing what they can to make that a part of the culture.

Release process

How often do they deploy their code?
How fast is the process?
How fast is the rollback?
Who's in charge of a deploy?

When I was working with the Google Maps team, we had weekly release cycles, and there'd be quite a few changes in each release. The team was big enough that there was a designated "build cop" who oversaw each release, checking the tests passed, deciding which changelists were worth a cherry pick, and monitoring the roll out of the release across the servers.

At Coursera, we wanted to be able to release smaller changes, more often - several times a day, if possible. It was hard at first because our deploys took 45 minutes - and so did our rollbacks! Fortunately, our infrastructure team came up with a new user-friendly deploy tool that takes only a few minutes to roll new code out to the AWS servers or roll it back. I used to have mini heart attacks every time I deployed, fretting over how horrible it would be if I accidentally took down the site, but now, I deploy happily because I know I can take it back quickly. Of course, we also have tests that are automatically run before deploys, and we always check the status of those first.

The deploy process can vary greatly across companies, and even in teams within a company, but the important thing is that there *is* a process, and ideally it's one that will allow you to spend most of your time writing code, not releasing it.


Post-mortems

Shit happens. When shit does happen, does the team take steps to prevent that shit from happening in the future?

At Google, I learnt the value of post-mortems, documents that detail the timeline of an issue, the "things that went right", the "things that went wrong", and assigned action items to prevent it that were actually reasonable to get done. We wrote up post-mortems for everything in the company, not just in engineering (I wrote up an epic post-mortem after a badly worded tweet landed me on TechCrunch, for example). We do the same at Coursera, and we share the post-mortems with the entire company and sometimes even with our university partners.

It's common-place these days for public-facing companies to put post-mortems on their blogs, and to me, it's a sign that the companies believe in being transparent to their users, and that's a good thing. You want to be at a company that regularly writes up post-mortems, because then you'll know that they care about learning from their mistakes and not just sweeping them under a carpet.

Stack Stability

How often are they changing their stack? Too often?

It's good to try out new technologies that can vastly improve a codebase, but it can also be a big time suck, and a sign that the focus is not in the right place. Plus, every time a team tries out a new technology, it almost always leaves a bit of code using the old stack, which leaves the codebase as a stack of legacy codebases that most engineers don't know how to touch.

Ideally, when you join a team, they will be able to tell you something like "We use X, Y, and Z. We are evaluating W due to repeated issues with X, and coming up with a plan to see whether the migration would be worth it." If there is a "W", that "W" is hopefully a tried-and-true technology, not the latest thing to hit the front page of HN, otherwise your team will be the one that has to learn all the quirks and drawbacks of "W" from scratch.

What else?

There are also things to look for in the broader company culture too, like communication, transparency, prioritization, humility, but that's a whole other post.

Now, don't expect every engineering culture to have gold stars on every bullet point - but, look for signs that they are headed in that direction, that they do sincerely want to make their codebase a place where everyone can enjoy coding and learning from each other.

What did I miss? Let me know in the comments what you look for!