Tuesday, August 31, 2010

How to Pretty-Print Code Snippets in Blogger

I often want to include snippets of code in my instructional blog posts, so to make them easier to read, I decided to add syntax highlighting to my blog tonight. There are various syntax highlighting solutions out there, but we use google-code-prettify on code.google.com and it's worked well enough there, so I went with what I knew.

Here's how I added it to my Blogger blog:

  1. Click the Design tab and "Edit HTML".
  2. After the meta tag in the HTML, paste these two includes for the JS and CSS:
    <link href='http://google-code-prettify.googlecode.com/svn/trunk/src/prettify.css' rel='stylesheet' type='text/css'/>
    <script src='http://google-code-prettify.googlecode.com/svn/trunk/src/prettify.js' type='text/javascript'/>
  3. Search for "script" - for me, there's a script tag near the bottom of the page. In that script tag, put this javascript call (it's the prettify library's entry point):
    prettyPrint();
    If that tag doesn't exist, then just create a script tag at the bottom yourself.
  4. Now, whenever you're posting, add the prettyprint class to your pre or code tags:
    <pre class="prettyprint">
    var i = 2 + 4;
    </pre>

To see examples of where I've used this, check out JSON API for Posterous for Python snippets or the Google APIs Timeline for JS snippets.

For more details on using the prettify library, see the readme.

Monday, August 30, 2010

A JSON API for Posterous

I recently became mildly obsessed with Europopped.com, a blog that highlights both really catchy & horribly tacky music videos from all over Europe, and I've started thinking up mashups to fuel my obsession. So, I looked up the API for Posterous.com, the blogging platform that powers Europopped, and discovered that its API is not quite as mashup-friendly as I hoped. They do offer an API for retrieving public feeds without authentication -- the first thing I looked for -- but the API result output is a custom XML format -- not optimal for client-side mashups. I was expecting to find an API output that was either ATOM-based, so I could pipe it through existing Feed->JS proxies like the Google AJAX Feeds API, or even better, an API output in JSON with support for callback parameters. The documentation indicates the API is still under development, however, so hopefully they will soon go down one or both of those routes.

But in the meantime, I decided to remedy their lack of a JSON output with a quick App Engine app to proxy API requests, convert the XML to JSON, and return it.

First, the end result:

If I wanted to use the Posterous API to get the last 50 posts from the Europopped blog, I'd fetch this URL and it would return XML for each post:


To use my proxied JSON API to get those 50 posts, I'd fetch this URL:

Tip: Install the JSONView extension for Chrome to see the result pretty-printed.

Notice that the only difference is the domain name -- I wanted the proxied API to mirror the actual API as much as possible, to make it easy to figure out the URLs to construct from the documentation, and to make it easy to port to an actual JSON offering from Posterous in the future, on the assumption that actually happens. :)

If I want to get the same JSON wrapped in a callback, to use it inside a webpage, I'd fetch this URL:


Now, the code behind it:

I've checked in the two files it took to write the proxy on App Engine for Python, and I'll step through them here.

First, I set up a URL handler to direct all /api requests to my api.py script:

application: posterous-js
version: 1
runtime: python
api_version: 1

handlers:
- url: /api/.*
  script: api.py

Then, in api.py, I directed all requests to be handled by ApiHandler, a webapp.RequestHandler class. In that class, I reconstruct the URL for the Posterous API request:

  url = 'http://posterous.com' + self.request.path + '?' + self.request.query_string
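To make the mirroring concrete, here's a tiny standalone sketch of that reconstruction; the path and query string below are example values for the Europopped case, not pulled from a live request:

```python
# Standalone sketch of the URL reconstruction above; in the real handler,
# path and query_string come from self.request.
path = '/api/readposts'
query_string = 'hostname=europopped&num_posts=50'
url = 'http://posterous.com' + path + '?' + query_string
print(url)  # http://posterous.com/api/readposts?hostname=europopped&num_posts=50
```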

Then I check memcache to see if I've already fetched that request recently (in the last 5 minutes):

    cached_result = memcache.get(url)
    if cached_result:
      dict = simplejson.loads(cached_result)
    else:
      dict = self.convert_results(url)

If I didn't find it in cache, then I'll call a function to fetch the URL and convert specified top-level tags in the XML to JSON:

  result = urlfetch.fetch(url, deadline=10)
  if result.status_code == 200:
      dom = minidom.parseString(result.content)
      errors = dom.getElementsByTagName('err')
      if errors:
        dict = {'error': errors[0].getAttribute('msg')}
      elif url.find('readposts') > -1:
        dict = self.convert_dom(dom, 'post')
      elif url.find('gettags') > -1:
        dict = self.convert_dom(dom, 'tag')
      elif url.find('getsites') > -1:
        dict = self.convert_dom(dom, 'site')

I convert from XML to JSON using the minidom library, converting each tag to a JSON key and recording the text data or CDATA as the JSON value. This technique means that I don't actually convert any nested XML tags, but in the Posterous API, that only means that my output is missing the comments information for posts, which is the least interesting information for me.

 def convert_dom(self, dom, tag_name):
    dict = {}
    top_nodes = dom.getElementsByTagName(tag_name)
    nodes_list = []
    for top_node in top_nodes:
      child_dict = {}
      for child_node in top_node.childNodes:
        if child_node.nodeType != child_node.TEXT_NODE:
          child_dict[child_node.tagName] = child_node.firstChild.wholeText
      # Collect the converted dict for each top-level node
      nodes_list.append(child_dict)
    dict[tag_name] = nodes_list
    return dict
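If you want to try the conversion without the App Engine plumbing, here's a standalone version of that function using only the standard library; the sample XML is made up, just shaped like the Posterous output with top-level post tags:

```python
import json
from xml.dom import minidom

def convert_dom(dom, tag_name):
    # Flat XML -> dict conversion: each child tag becomes a key, its
    # text/CDATA becomes the value; nested tags are skipped.
    result = {tag_name: []}
    for top_node in dom.getElementsByTagName(tag_name):
        child_dict = {}
        for child_node in top_node.childNodes:
            if child_node.nodeType != child_node.TEXT_NODE:
                if child_node.firstChild is not None:
                    child_dict[child_node.tagName] = child_node.firstChild.wholeText
        result[tag_name].append(child_dict)
    return result

# Made-up XML shaped like the API responses discussed above:
xml = """<rsp stat="ok">
  <post><title>Euro hit</title><link>http://example.com/1</link></post>
  <post><title>Another hit</title><link>http://example.com/2</link></post>
</rsp>"""
print(json.dumps(convert_dom(minidom.parseString(xml), 'post')))
```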

Finally, after getting the JSON representing the API call, I output it to the screen with the appropriate mime-type and wrap it in a callback, if specified:

      json = simplejson.dumps(dict)
      memcache.set(url, json, 300)
      callback = self.request.get('callback')
      self.response.headers['Content-Type'] = 'application/json'
      if callback:
        self.response.out.write(callback + '(' + json + ')')
      else:
        self.response.out.write(json)
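The wrap-or-not logic is easy to sketch as a standalone function, too; render_jsonp is my own name for it, not something in the app:

```python
import json

def render_jsonp(payload, callback=None):
    # Serialize the dict; wrap it in callback(...) only if one was given.
    body = json.dumps(payload)
    if callback:
        return callback + '(' + body + ')'
    return body

print(render_jsonp({'error': 'none'}, 'handleData'))  # handleData({"error": "none"})
```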

It's a quick hack and one that I hope to see replaced by the official Posterous API, but it's cool that it was so easy to do and now I can move on to actually making the Europopped mashup of my dreams. :)

Sunday, August 29, 2010

Girl Develop It: Teaching Web Programming to Women

A few months ago, Sara Chipps and I ended up on the same list of "Hacker Women on Twitter", and I followed the link from her bio to the Girl Develop It (GDI) project. The mission of GDI is to lessen the gender gap on the web by getting more women to develop software, and they're going about that by offering low-cost web programming courses to women in their local area (New York City). Some of those women might become web developers themselves, but even if most of them don't, they will hopefully inspire the women around them (like daughters and friends) to think about going down that road. Of all the various attempts to get more girls in computing, this one is my favorite. It may take time -- it may even take generations -- but it's worth a shot.

So, I'm bringing GDI to Sydney, and kickstarting it with an introductory series on HTML & CSS, which will take place in 5 classes over 3 weeks at our local Google office. The series will be run like an actual course, in that students are expected to attend every class, to do homework, and to do a small final project (a personal website).

I'm currently in the midst of creating the curriculum, using my HTML5-based slide-making application, and am hoping to make it fun, practical, and re-usable. :)

Here's where you come in:

  • If you're a local female looking to learn those topics, then you can read more and register from the GDI Sydney page.
  • If you're a local female who's keen to help others learn these topics, then we'd love to have you as a teaching assistant for the course. Just send me an email or wave (pamela.fox@).
  • If you're just curious to see how it goes, then subscribe to the main GDI blog.

Thanks so much to Sara for supporting the Sydney version of GDI. I'm excited to see how this goes and to meet the first class of students!

Wednesday, August 25, 2010

Importing data from Spreadsheets to App Engine

Google App Engine provides the remote_api mechanism for uploading and downloading data from the datastore. It's handy and lets you import different types of data, but requires a certain amount of setup, and well, sometimes I'm lazy and don't feel like going through that setup. So, another way that you can easily import data into your datastore is to store it in a Google spreadsheet, publish the sheet, and write a handler to import the spreadsheet rows as datastore entities.

For example, I created a spreadsheet to store information on Wave extensions, using one column for the URL and another column to indicate if they're featured or not.

Then, I published that spreadsheet using the Share->Publish menu, and constructed a URL for the JSON database-like output:


To get the URL for your own public spreadsheet, just change the spreadsheet key (the long string there) and the worksheet ID (the first sheet is always 'od6').

That JSON includes an array of entry objects, and each entry object contains an object for each of the columns, e.g.

 gsx$url: {$t: "http://api.rucksack.com/hostelwithme.xml"},
 gsx$featured: {$t: "yes"}

Note: Column headers are stripped of whitespace and lowercased when converted to keys in the JSON feed, so I always just start off with them that way in the spreadsheet to make it painless to find them in the JSON.
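As a quick illustration of that note, here's a little helper of my own that mimics the header-to-key mapping described above (whitespace stripped, lowercased, and prefixed with gsx$); the real feed does this server-side:

```python
def gsx_key(header):
    # Mimic the published feed's column-key normalization: strip
    # whitespace, lowercase, and prefix with 'gsx$'.
    return 'gsx$' + ''.join(header.split()).lower()

print(gsx_key('Featured'))   # gsx$featured
print(gsx_key('API Total'))  # gsx$apitotal
```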

Now, I write a simple handler that will pull in that JSON, parse each entry object, and convert them into datastore entities.

class ImportAppsActionHandler(BaseHandler):
 """ Handler for importing existing apps."""

 def get(self):
   user = users.get_current_user()
   # Need admin access to import
   if not users.is_current_user_admin():
     self.error(403)
     return
   # Fetch JSON of published spreadsheet
   url = "http://spreadsheets.google.com/feeds/list/0Ah0xU81penP1dDNwSFROSU5KVlFRbmo5cERsTElKTGc/od6/public/values?alt=json"
   result = urlfetch.fetch(url)
   if result.status_code == 200:
     feed_obj = simplejson.loads(result.content)
     if "feed" in feed_obj:
       entries = feed_obj["feed"]["entry"]
       # Make an Application entity for each entry in feed
        for entry in entries:
          url = entry['gsx$url']['$t']
          featured = entry['gsx$featured']['$t']
          app = models.Application()
          app.url = url
          # Field name assumed here to match the spreadsheet column
          app.featured = (featured == 'yes')
          app.moderation_status = models.Application.MOD_APPROVED
          app.put()  # Save the entity to the datastore

   # Clear the memcache

When I visit that handler, it imports the data, and it works the same way on both the local development server and the deployed app.
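Stripped of the App Engine plumbing, the feed parsing at the heart of the handler can be tried standalone; the sample JSON below just mirrors the entry shape shown earlier in this post:

```python
import json

# Sample feed mirroring the published-spreadsheet JSON shape above;
# the values are the examples from this post.
feed_json = '''{"feed": {"entry": [
  {"gsx$url": {"$t": "http://api.rucksack.com/hostelwithme.xml"},
   "gsx$featured": {"$t": "yes"}}
]}}'''
feed_obj = json.loads(feed_json)
apps = []
for entry in feed_obj["feed"]["entry"]:
    apps.append({
        'url': entry['gsx$url']['$t'],
        'featured': entry['gsx$featured']['$t'] == 'yes',
    })
print(apps)
```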

There are various caveats to this technique, of course. First, your spreadsheet needs to be published. If you wanted to do this with a private spreadsheet, for more sensitive data, you would need to use the full Spreadsheets API and do an authentication dance. Second, your handler is limited to the typical 30-second limit for an App Engine request. If you wanted to use it to import many rows of data, you'd probably want to split it up across multiple requests by using the deferred task queue or redirecting with pagination.

But, hey, it was useful for my situation, so maybe it's useful for one more situation out there in the world. :)

Tuesday, August 24, 2010

Tip for Networking at Conferences: Be a Speaker!

Most people don't realize it, but I am incredibly shy -- they don't realize it because I've spent a long time being shy, and have developed various "workarounds" because I know that it's healthy for me to interact with people and that it's not healthy for me to be a hermit (though tempting).

One of the situations where I find it quite easy for my shyness to take over is at conferences, where I'm surrounded by hundreds of people that I don't know. I think perhaps some of them would be interesting conversational partners, but I haven't the slightest idea who they are, or how to approach them.

So, I work around it -- by being a speaker. By speaking at a conference, I make it so that there is at least a room full of people that now have an excuse to talk to me, and I have something to talk with them about. That's a roomful more of people than when I was wandering around aimlessly through the halls before the talk!

Now, I know, it's not possible to be a speaker at every conference you go to. But, many conferences (at least the cool ones) offer lightning talk sessions which can be signed up for on the day of the event -- and many people have at least one interesting or funny topic they can talk about for 5 minutes.

Whether you're a pre-slotted speaker or a lightning talk speaker, try to get your speaking slot on the first day. First, of course, that means you'll be able to relax after your talk and enjoy more of the conference, and second, it means that the roomful of people will know of you sooner, and have more time to strike up a conversation with you.

And, hey, if any of you ever see me wandering around a conference (or sneaking into a bathroom to hide from all the intimidating people), stop by and say "hi". :)

Sunday, August 22, 2010

5lide: HTML5-based Slides Maker

At last week's GTUG campout, a 3-day long HTML5 hackathon, I signed up to be a TA for the weekend. That meant I spent most of my time wandering around answering random questions and helping developers debug their hacks. But, I can't be surrounded by a bunch of people hacking on cool shit and not join in myself -- it's just way too tempting. So, on Friday night, after coming home from the pitches and discovering that drinking two Dr. Peppers was not in fact a good way to avoid jet lag, I stayed up into the wee hours hacking on an idea I'd been brewing for a few weeks.

As some of you know from my posts about Prezi and Ignite, I am a fan of alternative slide formats and presentation techniques. In my work as both a student and a developer advocate, I have made a massive number of Powerpoint presentations, and I do believe there is much room for improvement and room for experimentation. So, whenever I spot a new slide format in the wild, I get excited to try it out myself.

Early last year, the HTML5 advocates started using a set of slides that both showed off HTML5 features and were written in HTML5 - so they could do interactive samples and harness the power of HTML5 at the same time. (And by HTML5, I mostly mean rounded corners and CSS transitions :). They recently created a generic stripped-down version for anyone to modify and use in the HTML5 studio, but I wanted to take it a step further than that. I wanted to be able to store my slide data in a database and pull that into the slides template dynamically, so that I could work on my slide content separate from my presentation and easily create multiple slidesets without coding the base HTML each time. Thus began my hack!

Since I had limited time to work on the app, I looked around for a sample application to start off with. One of the things I love about App Engine (well, at least the Python version) is that when I find an open-sourced app similar to the one in my head, I can get it downloaded and deployed in just a few minutes. In the google-app-engine-samples project, I discovered the tasks app by the great Bret Taylor. The tasks app lets users sign in and create different task lists, where every list has a re-orderable set of tasks. The similarity to my app design was uncanny, and ridiculously convenient. With some simple search and replace, the tasks app became a slides app, letting users create different slide sets, where every slide set has a re-orderable set of slides. (See what I mean?) Then I added the more slide-oriented features: I turned the generic HTML5 slide deck into a django template that pulled in the data, I added a "theme" option for each slide deck and used a different CSS for each theme ("party", "ballerina", and "android"), and I created a notion of a slide type for each slide (either the intro, transition, or body).

I demoed the app in this form on demo night, and as usual, I haven't had time to add anything else to it since then. I'd like to add an "import from docs" as the next feature, as I have a few slidesets I want to bring over. I also think the slide editing interface could use some love and re-thinking, as it's really just a re-skinned task list editing interface right now. I have open-sourced all of the code here, as I'd love for other people to play around with it and maybe submit some patches (hint hint).

Happy 5lide-ing! :)

Google APIs Timeline: Behind the Scenes

As part of my recent series of talks on the landscape of the Google APIs, I started off with a timeline showing the history of our APIs, from just 10 APIs in 2005 to over 80 now, with many launches and a few deprecations along the way. That timeline, appropriately, is itself a quick mashup of our APIs, and I thought I'd spend a few minutes talking about how I made it here.

First, I needed the data on our APIs over time. Unfortunately, we don't offer an API for querying our API existence over time (as awesomely meta as that would be), so I had to go the painstakingly manual route. I used the Internet Wayback Machine, a website that lets you view cached versions of other websites at various times. We launched code.google.com in March 2005 (and by we, I mean Chris Dibona), and the machine cached many versions since that first version. By looking at a combination of the listings on the apis.html page and the launch announcements on the front page, I could figure out roughly which APIs we introduced when.

Next, I needed a place to store that data. Well, as some of you know, I'm a massive fan of using Google Spreadsheets as a lightweight database, so I created a worksheet with a few columns (date, APIs added) and filled that in as I browsed the code.google.com of yore (and got a bit nostalgic along the way).

Finally, I wanted to visualize the data in a cool way. I had used the Annotated Timeline from Google Chart Tools for the Wave Visualizer mashup last month, and I figured this was another good use of it. It's the same timeline that's used by Google Finance to show stock trends compared to news stories, and similar to the timeline used by Google Trends. Though the timeline itself is a Flash SWF, it is exposed via a Javascript API.

Now, to put it together, I just needed to pull in the spreadsheets data and feed that into the timeline.

I brought in the spreadsheets data using the JSONP technique, dynamically appending a script tag with the src attribute set to the JSON output of the spreadsheets API, and specifying a callback function to be passed the JSON data. Note that I used the "values" projection for the feed, as that treats the worksheet as a database and returns the named columns as values in the JSON.

function appendSpreadsheet() {
  var script = document.createElement('script');
  script.src = 'http://spreadsheets.google.com/feeds/list/0Ah0xU81penP1dE1TNnpscHdYYU5qSU5GZldLM1VMMVE/od6/public/values?alt=json-in-script&callback=onSpreadsheetLoad';
  document.body.appendChild(script);
}

In the callback function, I parse through the rows of the spreadsheets feed and add them as rows to a DataTable object. I then create the AnnotatedTimeline object and ask it to draw that data.

 function onSpreadsheetLoad(json) {
  var rows = json.feed.entry || [];
  var data = new google.visualization.DataTable();
  data.addColumn('date', 'Date');
  data.addColumn('number', 'Total APIs');
  data.addColumn('string', 'Changes');
  data.addRows(rows.length);
  for (var i = 0; i < rows.length; i++) {
    var row = rows[i];
    var year = parseInt(row['gsx$year']['$t']);
    var month = parseInt(row['gsx$month']['$t']);
    var day = parseInt(row['gsx$day']['$t']);
    var total = parseFloat(row['gsx$apitotal']['$t']);
    var info = row['gsx$info']['$t'];
    data.setValue(i, 0, new Date(year, month, day));
    data.setValue(i, 1, total);
    data.setValue(i, 2, info);
  }
  // The container element id is assumed here
  var annotatedtimeline = new google.visualization.AnnotatedTimeLine(
      document.getElementById('timeline'));
  annotatedtimeline.draw(data, {'displayAnnotations': true});
}
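The row-parsing loop translates naturally to Python, too; here's a standalone sketch with one made-up row in the same list-feed shape (note that Python's date takes a 1-based month, unlike the JavaScript Date constructor):

```python
import json
from datetime import date

# One sample row in the list-feed shape used above; the values are made
# up to mirror the March 2005 starting point mentioned in this post.
feed_json = '''{"feed": {"entry": [
  {"gsx$year": {"$t": "2005"}, "gsx$month": {"$t": "3"},
   "gsx$day": {"$t": "1"}, "gsx$apitotal": {"$t": "10"},
   "gsx$info": {"$t": "code.google.com launches"}}
]}}'''
rows = json.loads(feed_json)['feed']['entry']
points = []
for row in rows:
    when = date(int(row['gsx$year']['$t']),
                int(row['gsx$month']['$t']),
                int(row['gsx$day']['$t']))
    points.append((when, float(row['gsx$apitotal']['$t']),
                   row['gsx$info']['$t']))
print(points[0])
```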

And with that (and some fiddling with the timeline options), I had an interactive timeline:

It's a pretty simple mashup that only took a few hours to put together (99% of which was data collection), and an example of how you can easily combine a couple of our APIs in interesting ways. Happy mashing! :)

Monday, August 9, 2010

Android Painting Apps: A Review

Here is my preliminary review of 4 painting apps designed for the Android platform. Please suggest others, or share your opinions of these.

Note: I am not a professional artist, just an amateur doodler. :)

Differentiating Features:

  • Paint Stroke Texture: Every app seems to implement its own stroke mechanism, so texture varies - and is important.
  • Brush Size Picker: It's nice if you can quickly change/toggle sizes. You cycle through a lot of brush sizes when painting.
  • Color Picker: It's important that you can pick shades of colors, like a human flesh tone.
  • Undo/Redo: It's very easy to make mistakes while using a clunky finger on a small screen, so having an undo is essential to refining paintings.
  • Import (load from gallery): If you can bring in existing pictures from the gallery, then you can fork your previous paintings or paint on top of photos (like drawing moustaches on your colleagues).
  • Export (save to gallery): You always want to be able to save your paintings out, so you can keep an archive of them, tweet them, save them to your online photo albums, etc.

FingerPaint Pro

  • Paint Stroke: Each stroke has a highlight + a shadow to it, like it's really a 3-dimensional stroke. It makes an interesting effect, but also makes it hard to do large blocks of color. I like the challenge of that though.
  • Brush Size Picker: A nice UI that cycles between sizes with a single tap.
  • Color Picker: A full H/S/V color picker with every color possible.
  • Undo/Redo: Not supported.
  • Import: Supported.
  • Export: Supported.
  • Conclusion: This paint texture is so fascinating that I find myself using this app the most. If this app implemented undo/redo, an eyedropper tool, and perhaps a bit of flexibility with the texture (to enable solid blobs of color), then it would really be my favorite app.


Eye of the Mummy
Ice Cream in the Sky

Kids Paint App

  • Paint Stroke: Each stroke is of a random thickness and color.
  • Brush Size Picker: Not supported.
  • Color Picker: Not supported.
  • Undo/Redo: Not supported.
  • Import: Not supported.
  • Export: Supported.
  • Conclusion: It is cute to play with, and it is interesting to see what one comes up with, but it will be frustrating for anyone with a particular goal in mind. (And I think most people acquire a goal after fiddling for a while, and a sudden change in stroke color can easily squash that goal.) I admire the different take, though.



Magic Marker

  • Paint Stroke: Each stroke is actually white, with a colored glow around it (for a neon effect).
  • Brush Size Picker: Clicking on the current brush size pops up other sizes to pick from.
  • Color Picker: 8 swatches and a basic wheel. Shades not supported.
  • Undo/Redo: Supported.
  • Import: Supported.
  • Export: Supported.
  • Conclusion: This app has a specific type of artistry in mind, and it delivers all the basic features needed to make the artist's experience a good one. It would be great to see some additional pizazz in the future -- like sparkles or brush shapes -- but this is a great start for a unique vision.


Magic Mercure


  • Paint Stroke: You can pick a variety of different textures, and some of them produce a nice effect of varying thickness.
  • Brush Size Picker: The stroke adjustment dialog lets you pick a "base width" and "tip width", as well as a transparency.
  • Color Picker: You can pick basically any color, using two different types of pickers. You can also use an eyedropper tool to pick an existing color from the app.
  • Undo/Redo: Not supported. :(
  • Import: Supported.
  • Export: Supported.
  • Conclusion: This app is probably the best painting app, because you have so much flexibility in the paint stroke, from texture to color, and with enough layers and patience, you can create some quite detailed paintings. The huge missing feature is the Undo, and with the absence of that, it's hard to go from a good painting to a great painting.


Sunday, August 8, 2010

How to pick & prepare an Ignite talk

I am now somewhat of an Ignite veteran. I've given talks at multiple Ignites, including Ignite Spatial (for GIS folks) and Google I/O Ignite (for hard core developers). I don't necessarily give the best talk, but I usually give a talk that goes over well. So, I thought I'd share my personal technique for creating an Ignite talk, as it can be a bit different from typical talks.

When I think of a topic for an Ignite talk, I look for one with the following characteristics:

  • Something that mixes humor, inspiration, information
  • Something I'm personally passionate about
  • Something that tells a story or has a narrative

Then, once I've decided the topic, I need to actually create the talk. I typically take these steps:

  • Spend a week simply thinking about the topic, hearing myself talk about it in my head. It's a great way to spend the morning commute, and a great way to make sure that the flow of the talk will be natural once it's down on paper.
  • Create a text document. I usually draft in Wave these days, so that I can share it with my friend and ask for comments, but otherwise any old editor will do - some folks even use real-life things like post-its. The important thing is that it should be a format that allows you to iterate easily.
  • Write down a list of all the bits of information you potentially want to communicate, and order them according to what feels natural.
  • See if you are at or around 20 bits of information, where a bit is about 2-3 sentences (you may want to time yourself to see what your personal pace of communication is, and calibrate that number). If you have too many, think about what point to cut, or what details to leave out. If you have too few, think about what else you might communicate - maybe more backstory, a call to action, or an anecdote.
  • Say this talk to yourself. Say it to a friend. See if it feels natural, that everything transitions together, and that it's similar to the way you might spontaneously tell these bits of information. That will make it easier to deliver.
  • Start thinking about the visualization of your information bits. If you've got 5 bits in a row that you have no idea how to visualize, you may want to re-think those bits. Eventually, each bit will need a picture.
  • Keep playing around with the talk, moving bits of information around, moving them in and out, until you get to a point where it just feels right, and you're at the right number of slides.
  • Create a powerpoint (or keynote) presentation.
  • Insert a text box on each slide that contains the text for that slide. Position it below the slides.
  • Start filling in the visuals. They might be photos that you've taken, images from the web (like iStockPhoto or CC-licensed Flickr pics), or hand-drawn doodles. Try to make them something that resonates with the audience, but also jogs your memory about the content of the slide.
  • Add short headers to each slide, if you want. Position them near the top of the slide, as the bottom is often hard for the audience to see. Try to avoid adding any text besides a header -- you don't want to distract audience members from what you're saying, and you don't want to make them feel they're reading off your talk.
  • Move the slide text up to the bottom of the slide.
  • Set the animation settings for auto-advance every 15 seconds.
  • Start practicing the talk. Since the slide text is visible, you get to view the text while practicing. After each practice, go back and make any tweaks to the text, change anything that didn't feel right, went too long, etc.
  • Once you feel comfortable, move the slide text below the slides, and practice the talk that way multiple times. This is the test that most nearly mimics the actual event characteristics.
  • Create a text document with all of your slide text, and print it out. You can now carry it around and practice it to yourself when you have the chance, like in the bathroom at the venue. :)
  • Go to Ignite! Drink water. Give the talk. Be excited.
  • Create a version of the slides that's consumable by non-Ignite attendees by moving the slide text box back onto the bottom of the slides. Upload it to Scribd, Slideshare, Google docs, and share it on Twitter, Facebook, etc. People love to read through these short slidesets, and with the transcript pasted on, the slides will actually make sense.

My past Ignite talks:

Friday, August 6, 2010

Why I Like Prezi

In my life, I have given a *lot* of presentations. In high school, they were presentations on group projects. In university, they were presentations on research projects. At Google, they're presentations on how to use our APIs. When I first started giving presentations, I used Powerpoint, like everyone else. But I kept thinking there must be a better way, and I experimented with other options - flash interfaces, interactive Javascript apps. Then I discovered Prezi, and it has become my presentation tool of choice.

Prezi is an online tool for creating presentations — but it's not just a Powerpoint clone, like the Zoho or Google offering. When you first create a Prezi, you're greeted with a blank canvas and a small toolbox. You can write text, insert images, and draw arrows. You can draw frames (visible or hidden) around bits of content, and then you can define a path from one frame to the next frame. That path is your presentation. It's like being able to draw your thoughts on a whiteboard, and then instructing a camera where to go and what to zoom into. It's a simple idea, but I love it. Here's why:

  • It forces me to "shape" my presentation. A slide deck is always linear in form, with no obvious structure of ideas inside of it. Each of my Prezis has a structure, and each structure is different. The structure is visual, but it supports a conceptual structure. One structure might be 3 main ideas, with rows of ideas for each one. Another might be 1 main idea, with a circular branching of subideas. Having a structure helps me to have more of a point to my presentations, and to realize the core ideas of them.
  • It makes it easy to go from brainstorming stage to presentation stage, all in the same tool. I can write a bunch of thoughts, insert some images, and easily move them around, cluster them, re-order them, etc. I can figure out the structure of my presentation by looking at what I have laid out, and seeing how they fit together. Some people do this process with post-its, but I like being able to do it with a digital canvas.
  • It works well for explaining concepts that make more sense with a diagram, because you can basically make your entire presentation be the diagram — and then you can just fly around that diagram, pointing out the flow and zooming into the important bits. For example, I used pseudo diagrams inside the robots API and gadgets API prezis to explain the API <-> server interaction.

Prezi isn't perfect, of course. I've made several feature requests, and have a list of others. There are the small requests, like wanting higher quality images and a code style for embedded snippets, and there are also the big picture requests. I want to be able to link prezis inside of prezis, so that I can make "components" of presentations, and easily zoom from one to the other. I want to be able to create multiple paths for a given prezi, so that I can skip over bits for some audiences, and emphasize them for others. Hell, I'd love if I could have one canvas that had every bit of possible content on a topic, and I could use that one canvas for 20 different presentations. I don't know that Prezi will ever implement these ideas, but I have more faith in them doing so than any other presentation tool, since they are making a point of thinking different.

Generally, I just love the fact that somebody is thinking different about presentation tools. There are so many people out there that are giving presentations every day, so as a society, we owe it to ourselves to invest more thought into how we do presentations. That's why I also like events like Ignite and Webjam, because they are challenging people to rethink the timing aspect of presentations.

(Not to talk about Wave all the time, but—)

My love for Prezi is similar to my love for Wave. Prezi is re-thinking presentations. Wave is re-thinking communication. Prezi gives me a blank canvas that can turn into a presentation of any shape. Wave gives me a blank wave that can turn into a document or conversation of any structure. Both of them are unfinished, but both have a bright future, and even if they don't succeed, they're successfully challenging the traditions of today.