Friday, April 27, 2012

Learning to Code Online

In my closing keynote at Mix-It 2012, I talked about why I think everyone should learn programming as a basic skill, and then about the various non-traditional ways that programming can be learned, both online and in person. I personally prefer learning to code in person, around humans who can explain why things work (or don't), but not everyone has the opportunity to learn in person, so it's great that there are people out there experimenting with online education.

While researching the talk, I was surprised to find that there are a lot of ways to learn to code online, many of which I wasn't familiar with, so I thought I'd take a minute to list them here. If you've tried any of these or know anyone who has, let me know in the comments how it was.

University Lectures

Many top-tier universities now make their course materials freely available online: lecture videos, and sometimes homework assignments and lecture notes as well. You won't earn an actual degree from watching them, but you will benefit from seeing how some of the smartest professors explain computer science concepts.

  • UNSW: Computer Science professor Richard Buckland posts many of his computer science course lectures on YouTube. (My roommate learned coding from watching his talks!)
  • MIT: The entire university has an OpenCourseWare initiative to put as many of their lectures online as they can, including about 244 courses from the Electrical Engineering and Computer Science department.
  • Stanford: They are experimenting with the concept of virtual students for some of their classes (including some non-CS ones), where you can sign up for free, watch the lectures as they happen and actually do the homework.
  • Coursera: A site started by Stanford professors that brings together courses from several top-tier universities following in Stanford's footsteps, and encourages more to join.
  • Udacity: A similar site to Coursera (and also started by Stanford professors!).


Online Courses

Many folks have realized that you don't need to be a university professor to teach programming, and they are building platforms and tools to teach online and/or to encourage people to teach each other.

  • Udacity: A set of nanodegrees to teach you modern skills in particular career areas, like frontend or mobile. Often taught by industry professionals, and includes code reviews from alums.
  • Khan Academy: A non-profit that wants to make education freely available, they have interactive courses on JS, HTML, and SQL.
  • CodeSchool: A site with interactive tutorials and videos, ranging from newbie level (like the famous TryRuby, originally from "why the lucky stiff") to more advanced.
  • Udemy: A platform for letting anyone create a course (with a set of video lectures) and then charge for people to watch it (usually around $30). It's not programming-specific, but there are a few beginner-level courses up on it.
  • Codecademy: A startup that teaches coding through interactive (JavaScript-based) tutorials and rewards you with badges for making progress through the tutorials. They also created CodeYear, a way of subscribing to their lessons one week at a time.
  • Bloc: An online, paid, custom-tailored curriculum for learning design, web development (Ruby), and iOS, including personal mentorship and interaction with other students.
  • Team Treehouse: This startup teaches web design, web development, and iOS development through videos and quizzes.
  • AppendTo: A series of training videos on JavaScript and jQuery, offered by a consulting agency.
  • CodeAvengers: Play a game and learn JS at the same time.
  • iHeartPy: An online lesson in Python, with badges.
  • SkillCrush: A just-launched startup with tutorials on practical web development topics (including less technical ones like "Beautify your blog") and a daily newsletter with terms of the day.

And of course, there are also sites that attempt to aggregate many of the above sources, so that you don't have to visit them all: CourseBacon and Teach Yourself To Code.

If you try one of these out and get frustrated, don't give up - take a break, try again, ask a friend, or try something else. Programming is hard to learn, and even harder if you're going about it on your own. Good on you for taking it on, either way!

Thursday, April 12, 2012

Theming Tumblr with Twitter Bootstrap

I started a blog recently for EatDifferent and I decided to use Tumblr as the blogging platform, as it has more of a community than other platforms. I wanted the blog to share some of the look & feel of the main site, for consistency's sake, so I chose to make my own custom Tumblr theme instead of using a pre-set theme from the gallery. I also wanted the blog to share some of the look & feel of the Stripe blog, because I think it's just so pretty — the author photos, the outset photos, the ample whitespace. After a few hours of hardcore copying, pasting, and tweaking from the various stylesheets, I achieved my goal. You can see what I came up with on the live EatDifferent blog or in the screenshot below:

Since I figure other people might also want to use Twitter Bootstrap in their Tumblr theme (like if you're already using it for your main site), I spent a few more minutes making a generic version of the theme. You can see it on the demo blog or in the screenshot below:

If you want to use it as a base for your theme, just grab it from this gist and modify away.

Wednesday, April 11, 2012

Using the Instagram API from a Python Flask App

Instagram came out with their Android app last week, and finally I had a solution to the age-old problem: all my home-cooked, healthy meals look like crap when photographed with my Android Nexus S camera, and so my EatDifferent stream was not so enticing. Thanks to the Instagram filters, I can now take photos of my meals that actually look somewhat palatable. I could have just taken photos with the Instagram app and uploaded them via the EatDifferent mobile app, but since Instagram offers an API, I wanted to see if I could use the API to automatically import photos from Instagram into EatDifferent. Well, as it turns out, I could, and it was a fairly easy feat, thanks in large part to the Python Instagram API wrapper. Here's a rundown of how it works.

Authenticating Users

Instagram uses OAuth2 for authentication, which means that my app needs a flow that redirects users to Instagram, gets a short-lived code back, exchanges that for an access token, and then saves the access token for any time it wants to make authenticated requests on behalf of the user.

On the settings page, users click on a button that hits this view and redirects them to Instagram:

@app.route('/authorize-instagram')
def authorize_instagram():
    from instagram import client

    redirect_uri = (util.get_host() + url_for('handle_instagram_authorization'))
    instagram_client = client.InstagramAPI(client_id=INSTAGRAM_CLIENT, client_secret=INSTAGRAM_SECRET, redirect_uri=redirect_uri)
    return redirect(instagram_client.get_authorize_url(scope=['basic']))

Then, when Instagram redirects back to my app, it hits this view which upgrades to an access token and saves it:

@app.route('/handle-instagram-authorization')
def handle_instagram_authorization():
    from instagram import client

    code = request.values.get('code')
    if not code:
        return error_response('Missing code')
    try:
        redirect_uri = (util.get_host() + url_for('handle_instagram_authorization'))
        instagram_client = client.InstagramAPI(client_id=INSTAGRAM_CLIENT, client_secret=INSTAGRAM_SECRET, redirect_uri=redirect_uri)
        access_token, instagram_user = instagram_client.exchange_code_for_access_token(code)
        if not access_token:
            return error_response('Could not get access token')
        g.user.instagram_userid = instagram_user['id']
        g.user.instagram_auth   = access_token
        g.user.save()
        deferred.defer(fetch_instagram_for_user, g.user.get_id(), count=20, _queue='instagram')
    except Exception, e:
        logging.error('Error handling Instagram authorization: %s' % e)
        return error_response('Error')
    return redirect(url_for('settings_data') + '?after_instagram_auth=True')

Parsing Posts

As you might notice in the above code, I call a method to fetch the user's latest Instagram posts after I've saved their authentication information. I defer that method using App Engine task queues, since it could take some time, and I don't need to do that while the user is waiting.

In the code to fetch the posts, I only process posts tagged with "ED" or "eatdifferent", since there may be times when a user wants to post something other than meals. I also check memcache to see whether I've already seen an update before processing it. I could instead store the max ID seen for each user and do it that way, but given the small number of posts I'm processing on average, I went with the solution that uses more memcache hits but is also more straightforward.

def fetch_instagram_for_user(user_id, count=3):
    from instagram import client

    user = models.User.get_by_id(user_id)
    if not user.instagram_auth or not user.instagram_userid:
        return

    instagram_client = client.InstagramAPI(access_token=user.instagram_auth)
    recent_media, next_page = instagram_client.user_recent_media(user_id=user.instagram_userid, count=count)
    for media in recent_media:
        tags = []
        for tag in media.tags:
            tags.append(tag.name.lower())
        if not ('eatdifferent' in tags or 'ed' in tags):
            continue
        cache_key = 'instagram-%s-%s' % (user.get_id(), media.id)
        # Skip any posts that we've already imported
        if util.get_from_cache(cache_key):
            continue
        imports.import_instagram(user, media)
        util.put_in_cache(cache_key, 'true')
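For comparison, the max-ID alternative mentioned above could be sketched like this (note that `instagram_max_id` would be a hypothetical new field on the User entity, not something in my actual code, and the helper assumes Instagram's usual "number_number" media ID format):

```python
def media_sort_key(media_id):
    # Instagram media IDs look like "1234567890_42"; the numeric prefix
    # increases over time, so it can serve as a "newness" key.
    return int(media_id.split('_')[0])

def filter_new_media(user, recent_media):
    # Keep only media newer than the last ID we processed, and advance
    # the stored marker past everything we're about to import.
    last_seen = media_sort_key(user.instagram_max_id) if user.instagram_max_id else 0
    new_media = [m for m in recent_media if media_sort_key(m.id) > last_seen]
    if new_media:
        user.instagram_max_id = max((m.id for m in new_media), key=media_sort_key)
    return new_media
```

That trades the extra memcache round-trips for an extra field to keep consistent on the entity, which is part of why the memcache check felt more straightforward to me.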

Subscribing to Posts

Now, I want to know whenever an authenticated user uploads a new photo, so that I can import it if it's tagged appropriately. The Instagram API uses parts of the PubSubHubbub protocol to let you subscribe to real-time updates. You can subscribe to all posts with particular tags, but you can also subscribe to all posts by your app's authenticated users, and in my case, that's the lower-noise option. (There's a surprising number of folks using the tag "#ED", presumably tagging everyone they know named "Ed".)

I only had to set up the subscription once (well, once for the test server and once for the deployed app), using this code:

    from instagram import client

    instagram_client = client.InstagramAPI(client_id=INSTAGRAM_CLIENT, client_secret=INSTAGRAM_SECRET)
    callback_url = 'http://www.eatdifferent.com/hook/parse-instagram'
    instagram_client.create_subscription(object='user', aspect='media', callback_url=callback_url)

When my app gets hit at the callback URL, it goes to this view, which either responds to the hub challenge (the first time Instagram hits the callback URL, to verify the subscription) or, if it's an actual update, verifies that it's from Instagram and calls another function to parse the update:

@app.route('/hook/parse-instagram', methods=['GET', 'POST'])
def parse_instagram():
    from instagram import client, subscriptions

    mode         = request.values.get('hub.mode')
    challenge    = request.values.get('hub.challenge')
    verify_token = request.values.get('hub.verify_token')
    if challenge: 
        return Response(challenge)
    else:
        reactor = subscriptions.SubscriptionsReactor()
        reactor.register_callback(subscriptions.SubscriptionType.USER, parse_instagram_update)

        x_hub_signature = request.headers.get('X-Hub-Signature')
        raw_response    = request.data
        try:
            reactor.process(INSTAGRAM_SECRET, raw_response, x_hub_signature)
        except subscriptions.SubscriptionVerifyError:
            logging.error('Instagram signature mismatch')
    return Response('Parsed instagram')

In this function, I extract the Instagram user ID from the update JSON, find the user(s) connected with that ID, and once again set up a deferred task to fetch their updates. I also set a countdown of 2 minutes on that task, since I saw cases where the update would exist in the Instagram API but wouldn't have all of its data yet (like the tags), maybe a stale data propagation issue on their side.

def parse_instagram_update(update):
    instagram_userid = update['object_id']
    users = models.User.all().filter('instagram_userid =', instagram_userid).fetch(10)
    if len(users) == 0:
        logging.info("Didn't find matching users for this update")
    for user in users:
        deferred.defer(fetch_instagram_for_user, user.get_id(), _queue='instagram', _countdown=120)

And that's pretty much it: it's a fun app and a fun API. Hopefully both stick around after the Facebook acquisition this week. ☺

Monday, April 9, 2012

Horsing Around Arizona

A few weeks ago, Anton and I took a road trip out west. In our original wild scheme, we were going to road trip all the way from San Francisco to New York… but then we found out that a) most of America is boring and b) renting a car for that long and that far is expensive. So instead, we flew into Phoenix, rented a Zipcar, and drove it to the Grand Canyon and back over four days, arriving just in time for JSConf. So it wasn't really the grand epic adventure we once envisioned, but it was still pretty damn cool. Let me take you on a little photo journey...

We started off our trip with a leisurely drive around Sedona, taking in the cactus-spotted scenery and the impressive rock formations.

IMG_0681 IMG_0684

We realized that we didn't really know anything about the history of the area, so we checked out Tuzigoot and Montezuma Castle, learning about the badass Sinagua tribes that literally carved their homes into the cliffs - and climbed up and down each day. Impressive!

IMG_0720

Then we drove up to Jerome and checked out their famous ghost town. We got behind the wheel of a few rusty vintage cars, and I even got to feed a real live donkey! We tried to stop for a drink in their saloon, but, well, ghosts aren't so good at serving alcohol.

IMG_0698 IMG_0702 IMG_0713

Next we drove up to Flagstaff, found ourselves a cheap motel on Historic Route 66, and used that as our base for a couple nights. From there, we drove up to the south rim of the Grand Canyon and hiked the South Kaibab trail, past the "Oooh-Ahh point" and on to the "Cedar Ridge" point, where we relaxed and took in the surreal, spectacular scenery.

IMG_0749 IMG_0751

After all that road tripping, we were ready for some nerding out. We spent a sunny Sunday at NotConf, where I spoke on my undying hatred for quadruple nested ternary operators and Anton enjoyed some quality hacking time. Then we spent the next few days at JSConf, taking in some amazing talks and meeting the best of the best JavaScript developers. And, yes, we might have taken a few compromising photos of our roommate.

IMG_0790 IMG_0808

What a week! By the time we got onto our airplane ride, I was exhausted and ready to hit the hay. Until next time, Arizona!

IMG_0811

Monday, April 2, 2012

Converting Addresses to Timezones in Python

In a perfect world, we'd all live in the same timezone and it would be the same time everywhere. Unfortunately, we have a sun, and the earth goes around the sun, and the earth is round, and someone invented the concept of time, and now we programmers have to deal with it.

For EatDifferent, I give users the option of getting email reminders — one in the morning, and one at night — so that means I need to know the user's timezone. I could just ask them for their timezone by prompting them with a giant drop-down, but I don't want to make my sign-up form longer, and I have yet to find a user-friendly timezone selection widget. So, instead, I ask for their location, and try to programmatically figure out their timezone from the location string.

When I originally implemented location to timezone conversion, I used an API from SimpleGeo that I could send an address to and get a timezone back. Unfortunately, SimpleGeo was acquired by Urban Airship, and their APIs were shut down this week — so I needed to find a new solution, stat. After searching around for a while, I found a few APIs that convert latitude/longitude coordinates into timezones, but no APIs that convert addresses into timezones — which meant I needed to first use a geocoding API to convert the address into coordinates, and then feed that into one of those timezone APIs. Since I only do this conversion once per user and I needed two APIs for one conversion, I wanted to use APIs that were either free or low-cost with per-transaction pricing.

Addresses -> Coordinates

When I asked on Twitter for geocoding suggestions, I got a few good tips: CloudMade, Yahoo! PlaceFinder, Geocoda, DeCarta, and of course, Google. CloudMade and Yahoo both looked like promising candidates, but since I'm already so familiar with the Google API from my years of actually working on the Google Maps API team, I decided to go with what I know. The Google terms of service requires that all apps using the geocoder eventually display the geocoded coordinates on a map, so I'll add mini maps to user profiles to appease the terms.

To use the Google geocoder from my app, I used this Python wrapper for the API. Here's what my code looks like for geocoding the address string and saving the results to the User entity:

try:
    geocoder_client = geocoder.Geocoder()
    geocoder_result = geocoder_client.geocode(user.location.encode('utf-8'))
    user.country    = geocoder_result.country__short_name
    user.city       = geocoder_result.locality
    user.latlng     = db.GeoPt(*geocoder_result.coordinates)
except geocoder.GeocoderError, err:
    logging.error('Error geocoding location for user %s: %s' % (user.get_id(), err))      

Coordinates -> Timezones

I had a few options for calculating the timezone: Geonames, AskGeo, EarthTools, and World Time Engine. I went with Geonames because it returns an Olson timezone string (which is what Python's pytz uses) instead of an offset or other timezone identifier, and because it's free for my needs.

I looked around for a Geonames Python wrapper, but I only found old ones, so I wrote a really simple wrapper of my own, based on the Geocoder API client, so that I could call it in a similar way:

try:
    geonames_client = geonames.GeonamesClient('myusername')
    geonames_result = geonames_client.find_timezone({'lat': user.latlng.lat, 'lng': user.latlng.lon})
    user.timezone = geonames_result['timezoneId']
except geonames.GeonamesError, err:
    logging.error('Error getting timezone for user %s: %s' % (user.get_id(), err))

My solution requires chaining two APIs together, which means double the requests and double the chance of failure, but I calculate the timezone in a deferred task, so that the user isn't waiting for it to happen, and I can set it up so that the task is retried in case of error. So far, the APIs have both been responsive and the solution is working as well as the original SimpleGeo single API call.
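The overall shape of that deferred task is roughly the following sketch, where `geocode` and `find_timezone` are stand-ins for the two API calls shown above (not my actual function names); letting exceptions propagate is what triggers the task queue's retry:

```python
def set_user_timezone(user, geocode, find_timezone):
    # Chain the two lookups: address string -> (lat, lng) -> Olson timezone ID.
    # Any exception propagates, so the task queue retries the deferred
    # task automatically instead of silently losing the timezone.
    lat, lng = geocode(user.location)
    # Fall back to the same default the settings dropdown uses.
    user.timezone = find_timezone(lat, lng) or 'America/Los_Angeles'
    return user.timezone
```

Since the user isn't waiting on the result, a retry a few minutes later is perfectly acceptable here.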

After I calculate the timezone, I let the user edit it by creating a dropdown with all the possible timezones (and there are a lot, at least according to pytz) and selecting the calculated timezone (or, if none was found, a default of America/Los_Angeles). Here's the code that creates the timezone dropdown using WTForms:

import pytz
import datetime
timezones = []
for tz in pytz.common_timezones:
    now = datetime.datetime.now(pytz.timezone(tz))
    timezones.append([tz, '%s - GMT%s' % (tz, now.strftime("%z"))])
timezone = wtf.SelectField('Timezone', choices=tuple(timezones))
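To show what those choices end up looking like, here's the label formatting pulled out on its own (using a fixed-offset timezone so the example doesn't depend on pytz):

```python
import datetime

def timezone_choice(tz_name, now):
    # Builds one (value, label) pair in the same format as the
    # WTForms snippet above, e.g. "America/Los_Angeles - GMT-0700".
    return (tz_name, '%s - GMT%s' % (tz_name, now.strftime('%z')))

# A user in Pacific Daylight Time (GMT-7) would see:
pdt = datetime.timezone(datetime.timedelta(hours=-7))
print(timezone_choice('America/Los_Angeles', datetime.datetime(2012, 4, 2, tzinfo=pdt)))
```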

In the future, I would like to actually present the user with a slick way of choosing their timezone, once I figure out what that looks like. So, how do you deal with timezones in your apps?

Tuesday, March 27, 2012

API Usability Testing

I often joke that API hackathons are basically API usability testing days, the best opportunity for API teams to see first-hand how developers use their APIs and what problems they run into - so when I was invited to an AT&T API day, I assumed it was another hackathon. But in fact, it was a formal "API Usability Day", structured and designed to find out what AT&T needs to do to improve their developer experience. It was the first I'd heard of any developer-facing company putting together something like that, so I was intrigued to check it out. (Okay, plus they compensated us monetarily for our time.)

The Subject Selection

Before being granted a seat at the event, we answered a survey about our development experience, provided resumes, and gave links to our LinkedIn/GitHub profiles. The survey asked questions like what languages we usually develop in and what mobile platforms we've used. I imagine they used the survey information to ensure that developers had some degree of prerequisite knowledge, and perhaps also to get a range of developers at the testing day. It was a small group, about 12 of us in all.

The API Tests

When we arrived at the AT&T lab, we were each given a seat, WiFi, and power outlets. We used our own laptops, as they wanted each of us to use our development environment of choice - to mimic what it'd be like if we were at home. We were given four packets describing tasks, and instructed to try to finish them without consulting each other - and if we did consult, we were to invite the researchers over so they could witness what we discussed. If we really ran into a stumbling block that we couldn't get past (like an API error that just wouldn't quit), we could invite the API engineer over, and they would take notes and videotape as we worked through it. The tasks increased in difficulty, so much so that we had 2 hours allotted for the last one, and as far as I know, none of us got through it all.

Here's what we were tasked with:

  • Download the SDK
  • Use the command line to send and receive SMS
  • Build an OAuth flow for authorizing a user and reporting their device location to them
  • Build a webapp for users to buy access to the app, then upload pictures via MMS and browse them in a gallery

We used new APIs in each test, so that by the end, we were familiar with almost all of their API offerings and had seen most of their documentation. The final two tasks also involved us setting up a frontend (which we each did in our language of choice, like Python for me and Go for Anton) but we weren't meant to spend much time on that - just enough so that we could test how the API flow worked for a webapp.

The Evaluation

At the end of each task, we filled out a survey about how we did and what we thought. Some of the questions asked were:

  • Did you complete the task? How hard was it?
  • Would you recommend this API?
  • What would have made the experience of using the API better?
  • Whose API does it better than we do?

Then we had an AT&T engineer come and chat with us more, asking us to describe how we set about doing the task (like what documentation we tried to use and where we got misled), and just getting our general feedback on the experience. We also emailed a zip of our code to them, though I don't know what they'll be doing with that. Oh, and because I'm a documentation-aholic like that, I also took extensive notes in a Google Doc that I've shared with them over email.

We were quite solitary throughout this whole process, trying to solve the problems using our own approaches and encountering our own sets of problems. At the end of the day, we all came together for a 30-minute debriefing where we talked about our general impressions of the AT&T APIs and developer experience, comparing and contrasting them with other APIs, and explaining what would make us want to use or not use AT&T's offering. One of my suggestions was for them to hold an internal hackathon using their competitors' APIs - so they could see for themselves how the experiences differed and learn from them (or steal, whatever works :).

The First of Many?

Product usability testing is now commonplace, and as APIs increasingly become the product these days, I think that API usability testing like the AT&T day could become more than just a rarity, and I think it can come in many forms. The AT&T day was really about how easy it was to use their documentation — you could also have tests centered on how intuitive an API's syntax is, or how easy it is to find answers to common errors.

An API usability testing day should not be the only way for API companies to understand their developer experience, of course. API companies should still be learning as much as they can from looking at how developers use their site, what developers complain about in the forums, what features they request in the issue tracker, etc. And as Anton pointed out during the debriefing, API companies should be using their APIs internally as much as possible and learning from their own experiences. But it's definitely an interesting way to put new developers under a microscope and zoom in on their experience.

If you're still reading... have you ever participated in or organized a day like this? If you have an API, is it something you'd do?

Saturday, March 24, 2012

Working around Android Webkit

I use PhoneGap to output the Android app for EatDifferent, which means that my app runs inside an embedded Android browser. As I've discovered and re-discovered every time I work on a new version of the app, I am not the biggest fan of the Android browser. And that's an understatement.

Sure, the Android browser is Webkit-based, so it technically supports modern HTML5 elements and CSS3 rules, but in practice, the browser can sometimes struggle with rendering the new shiny CSS stuff, especially when some user interaction causes it to repaint the DOM. It's not just that the browser slows down — it actually fails to re-paint DOM nodes (or, as I like to describe it, it "whites out" those nodes). When the whited-out nodes are my navigation menu or form buttons, then my app is rendered basically unusable. That's a shitty user experience, of course, so as the developer, I want to do whatever I can to make sure that a user doesn't have to experience that.

Unfortunately, these white-outs are difficult to debug. When I run into one, I try to replicate it a few times (so I know what user interactions caused it), and then I start stripping out CSS rules until I can't reliably replicate it anymore. Since the glitches only happen on the devices themselves (and not in Chrome, where I usually test CSS changes), I have to re-deploy the app every time I test a change. Needless to say, it's a slow process. The white-outs are also impossible to programmatically test for, as far as I know, so there's nothing I can add to my test suite to guarantee that changes in my code haven't brought any back.

So, yeah, they suck. But they suck less when you know what to look for and what to change, so here are some of the changes I made to get rid of the white-outs.

But first... detecting Android

I use the same HTML, CSS, and JS codebase for both the Android and iOS versions of my app, and for most of the changes, I only wanted to make them on Android. To do that, I have a function that detects whether we're on Android or whether we're testing Android mode on the desktop. I use the result of that function in my initialization code to add an "android" class to the body tag, which I can then reference in my CSS.

My Android detection function looks at the user agent (which isn't as simple as just looking for "Android", thanks to HTC and Kindle) and, as a backup, at the device information reported by the PhoneGap API.

Note that my Android detection function only checks if we're on an Android operating system, not if we're specifically in the built-in Android WebKit browser. My app only needs to check if it's on an Android OS, since it's wrapped inside PhoneGap and not accessed from arbitrary mobile browsers. If you're writing a website accessible from a URL and want to employ these workarounds only on the built-in Android Webkit browser, then you need a check that looks for the Android OS and a non-Chrome Webkit browser. Thanks to Brendan Eich for pointing that out in the comments.

  function inUserAgent(str) {
    return (new RegExp(str)).test(navigator.userAgent.toLowerCase());
  }

  function isAndroid() {

    function isAndroidOS() {
      return inUserAgent('android') || inUserAgent('htc_') || inUserAgent('silk/');
    }

    function isAndroidPG() {
      return (window.device && window.device.platform && window.device.platform == 'Android') || false;
    }

    return isAndroidOS() || isAndroidPG();
  }

  if (isAndroid() || getUrlParam('os') == 'android') {
    $('body').addClass('android');
  }

The case of the shiny modals

I use the modals from Twitter Bootstrap for dialogs in the mobile app, and I noticed white-outs and other visual oddities would happen when a modal rendered. To work around that, I overrode the modal CSS rules to reset various CSS3 properties to their defaults, so that the browser processes them as never having been set at all.

.android {
  .modal {
    @include box-shadow(none);
    @include background-clip(border-box);
    @include border-radius(0px);
    border: 1px solid black;
  }
}

After making that change, I decided to go ahead and strip down the Bootstrap buttons, nav bar, and tabs as well — not because I necessarily knew their CSS3 rules were causing issues, but because I'd rather have a usable app than a perfectly rounded, shadowed app. I later realized that the stripped-down CSS conveniently matches the new Android design guidelines quite well, so my changes are actually a performance and usability gain. You can see all my Android CSS overrides in this gist, and see the visual effect of the overrides in the screenshot below.


The case of too many nodes

When my app loads the stream, it appends many DOM nodes for each of the updates, and those DOM nodes can include text, links, buttons, and images. Android was frequently whiting out while trying to render the stream, understandably. I made a few changes to improve the performance of the stream rendering:

  • Stripping the CSS3 rules from the buttons (as described above) plus a few other classes.
  • Implementing delayed image loading. I had already implemented that for the web version of the app, since it is pretty silly from a performance and bandwidth perspective to load images that your users may not scroll down to see, and I discussed that in detail in this post.
  • Pre-compiling my Handlebars templates. This is a change I actually made for the iPhone, after discovering super slow compile times on iOS 5, but it helps a bit on Android as well.

The case of the resizing textarea

My app includes a textarea for the user to enter notes, which defaults to a few rows. Sometimes users get wordy, though, and type beyond the textarea, and I wanted the textarea to resize as they typed. I got this plugin working, but I kept seeing my app's header and footer white out when the textarea resized. I eventually figured out that Android didn't like repainting the header and footer because they were position: fixed (it didn't white out when I made them absolute), and I couldn't figure out how to get Android not to white them out. So, I opted to just make sure the code never resized the textarea while the user was typing, by adding a resizeOnChange option to the plugin and setting that to true for Android. It's not ideal, but well, that's life.

A better future?

As recently announced, there's now a Chrome for Android which is significantly better than the built-in Webkit that ships with it, and I'm hopeful that Android apps will be able to use Chrome for their embedded WebView in the future (see my issue filed on it here). I look forward to making app design decisions based on making a better user experience, and not on preventing a horrible one. :)