Wednesday, December 29, 2010

Why I Love(d) Wave

Note: I originally wrote this post in August, but decided not to publish it until now. I've left the text as it was then.

As recently announced on the Google blog, Google has decided to discontinue Wave as a standalone project. As many of you know, I worked on Wave for the last year, as an advocate and support engineer for the APIs, and well, as you can imagine, I'm quite disappointed right now.

I know that one of our failings in growing Wave usership was the inability to properly articulate what Wave is good for, and I think part of the reason is that Wave can't be simply compared to any existing service. Wave is a flexible and generic platform, one that can be used for group discussions, for diary entries, for event commentary, for surveys. Of course, there are existing solutions for doing each of those things - but there is not one solution that does all of them. The thing I loved about Wave is that I could start a wave about a topic, and that wave could evolve from a survey to a discussion to a photo album (like when I asked folks what color I should dye my hair next), or be a combination of all three at once. I didn't have to know at the beginning what it would be, I didn't have to carefully weigh all the different options, I could just start a wave and see what it became. There are of course reasons for using specialized tools, like when you absolutely must have pristine formatting in a Word document, but when you're collaborating with your group of colleagues or community of developers, I've found it's better 99% of the time to have the flexibility of Wave than the specialized features of 10 different services at once. That's why I will find it hard to stop using Wave (and I hope I don't have to), because I would have to replace it with myriad different tools.

Because Wave is this flexible, we realized from the beginning that it should also be extensible, so that users could customize Wave for their own particular needs, and combine its conversational features with their own processes. Sure, you can do event planning with just text, but throw in a date picker gadget, RSVP gadgets and maps, and you've made planning that much more compelling. You can write your blog post drafts with just the native features of Wave, but after you throw in an approval gadget and a blog post publishing robot, you've got a full blog post workflow in Wave, one that can be easily shared with your colleagues.

On the Wave engineering team, we had started to build extensions to streamline our own internal processes. For example, we have a system on the team where a different group of people is on-call each week, and it is their job to respond to the pages and alerts about server issues, and escalate them to other folks if they can't handle them. To supplement this system, we would create an on-call wave each week, and the on-call engineers could paste the alerts into that wave, ask questions about them, and anyone following the wave could discuss the alerts with them. To enhance this system, one of my colleagues wrote a robot that 1) created the new on-call wave each week on a cron job, adding the relevant people based on calendar entries, 2) added links to all the current server admin pages, and 3) pulled the alerts directly into the wave. So, now, we have what amounts to a semi-automated system, where a piece of software does what it's good at (bringing in structured data), and humans do what they're good at (conversing on top of that data). It really shows off the power of Wave to combine structure plus free-form conversation; to combine data plus humans.

In Wave developer relations, we built multiple extensions to enhance our interactions with external developers. For example, we have a gallery of third-party extensions in Wave, and since we wanted that gallery to contain only user-friendly extensions, we created an App Engine app for developers to submit their extensions to the gallery. The app worked okay for storing the structured data about the pending submissions, but it was not conducive to having lengthy discussions about various aspects of an extension. As soon as I got the chance, I moved the process to Wave. Now, developers create a new submission wave, a robot fills it in with questions and fields, they answer the questions and click a button to share it with the review team. We add our own comments, and then click a button that physically puts it into the gallery. With the new wave-based process, we can get into inline discussions about their answers, I can start up private replies with my colleagues to ask for their unfiltered opinion, and we can attach images to show what we see or would like to see in the UI. We ended up with much better extensions, because we were able to have such fast and meaningful conversations -- and we didn't lose any of the structured data along the way.

My job is all about communication and collaboration, and that is what Wave excels in. The more I used Wave, the more I saw how Wave could make my job better, and the more I fell in love with the power and potential of Wave. I hope that the ideas of Wave keep going, whether through other Google products or the ever-growing Wave open-source community, because I don't want to live in a world without them.

Tuesday, December 28, 2010

Triaging Issues: The Wave-y Way

Note: This blog post was originally written to be posted on the Google Wave blog, but did not make it out before the cancellation of Wave. I am posting it here in case it is useful to Wave open-source developers in the future.

In developer relations, one of our core roles is to respond to the bugs and feature requests for our APIs, and we do it both because hearing what developers want helps us shape our APIs, and because understanding the bugs that developers encounter is one of the best ways to deeply understand an API. As a general rule, we try to respond to all new issues within a week of their filing, and we usually accomplish this by holding a weekly triage meeting with colleagues to review them. Back when I worked on the Maps APIs, we were all in the same office, so we'd just crowd around a table with our laptops and mash through the bugs. But, now, on the Wave APIs team, my colleagues are spread entirely across the world, with me in Australia and them on both coasts of the United States. So, we needed a new way to triage, and being on the Wave team, we went the obvious route: use Wave!

For our first weekly meeting, we simply met in a wave and pasted the untriaged issues ("Status-New") from our Google code issue tracker into the wave. Then we'd look at the issues and reply beneath the ones we were checking out to call dibs on them, so we worked on whatever bugs were most up our technical alleys. While we were researching a particular bug and had questions about it, we could start an inline conversation beneath the bug, so in the end, we could work both in parallel and in collaboration. This way, we were able to both triage many bugs and feel confident in our triaging, since we could bounce ideas off colleagues easily during the process.

After the first meeting, I started thinking that I could probably whip something together with our APIs that would streamline the triage process even more, and maybe make it possible for other teams to follow the same process. Around the same time, the project hosting team announced an Issue Tracker data API - perfect timing!

So, with the powers of the Wave Robots API and Issue Tracker API Python client libraries combined, I wrote the Bug Triagey robot. When you add Triagey to a wave, the bot titles the wave with the current date and inserts a configuration gadget. That gadget lets you pick a triage template, or create a new one. In a template, you just need to specify the Google code project, the Status label, and optionally, a label to sort by. For example, my Wave APIs template specifies "google-wave-resources", "New", and "ApiType", as we try to categorize our issues by sub-APIs, and each of us specializes in different APIs.

After you're happy with the template, the bot loads the sorted issues into the wave, and puts dibs buttons under each one, so you can indicate if you're "Looking at" or if you've "Responded to" an issue. After clicking a button, Triagey changes the button to show your username, and after clicking the "respond" button, Triagey strikes the issue out. That way, it's easy for you to review the issues in the wave and immediately see which issues haven't been triaged yet, and who's working on what. And just like before, you can start inline conversations to discuss the bugs you're working on.

Triagey is one of my favorite examples of how you can use Wave to combine automation and conversation to make collaboration easier and more efficient, and of how you can hold multiple structured conversations in a wave. To try it out yourself, install it from this wave. To modify it for your own use (like to triage other items) or to see how it was built, grab the code from the samples gallery.

Using App Engine to Turn Emails into Waves

Note: This blog post was originally written to be posted on the Google Wave blog, but did not make it out before the cancellation of Wave. I am posting it here in case it is useful to Wave open-source developers in the future.

When I first discovered Google Wave and started working on its APIs, I realized that Wave had the potential to be a universal inbox one day, using the robots API to collect notifications from all the different inboxes and services that I use. I've since realized that many legacy services don't provide notification APIs, but many of them do provide a common integration point - email - letting you specify an email address, and then sending new information to that address. Well, as it turns out, it's very easy to create an app in App Engine that can receive email at particular addresses, and then process that email. Using this technique, I can realize my dream of a universal inbox, by simply forwarding all my services to an App Engine-powered email address and appending the email content to a wave in my account.

So, I created the Mail Digester bot to do just that, and open-sourced the code for all of you to base your own mail-receiving robots on. The app starts with a handler for all inbound emails, so it receives emails sent to any [address]@maildigester-bot.appspot.com. It then creates or finds the digest wave for the particular address, and appends a blip with the content of the email, converting any HTML emails to text. For example, if I send an email to "myemailupdates@maildigester-bot.appspot.com", and the app can't find any digest associated with "myemailupdates", it will create a new digest wave and add my wave address. The next time that I send an email to that address, it will find the wave in the datastore, and append a blip to it with the converted email. I wrote it this way so that I could easily create different addresses for different purposes and subscribe to a few different waves.

Once I created the robot, I set to work seeing how many services I could bring into Wave via email notifications. First, I set an address as the issue notification email for the google-wave-resources Google code project, so I could have a digest wave for all the issue updates. Next, I added another address as a member of the Google-Wave-API google group, so I could have another digest wave for new group messages. Finally, I set another address as a Gmail forwarding address, piping all my inboxed emails into a third digest wave. I still need to pop out to Gmail or other services when I need to reply, but in fact, now that I do all my team and community collaboration inside Wave, I find I only need to respond once a day or so.

Following the same principle of responding to emails with a robot, several of my colleagues have built robots that handle specific types of notifications. For example, one Wave engineer wrote an on-call robot that parses incoming pages from email and appends them to our weekly on-call wave, so that it's easy for us to discuss the contents of the page. Another food-loving colleague wrote a bot that turns the daily menus into waves, adding images of the food and rating widgets.

Though it at first feels odd to be using email as an integral part of a wave robot, responding to email is actually a great way to integrate Wave with existing services in your organization and even add a collaboration layer on top of them.

Issue Tracking in Wave

Note: This blog post was originally written to be posted on the Google Wave blog, but did not make it out before the cancellation of Wave. I am posting it here in case it is useful to Wave open-source developers in the future.

Like most folks in the software development field, we spend a fair bit of our time on the Google Wave team tracking bugs and feature requests. At Google, we use an internal tool for tracking issues, and that tool lets us set some initial info about the bug, choose its status from a set of options, set its priority level, and assign it to a colleague. From there on out, we can append comments to the issue to discuss it further. The tool works well as a way of tracking the status of bugs across many projects and people, but it doesn't work as well for working through bugs quickly, or for having more complex conversations on requests. On the other hand, Google Wave works really well for collaborative debugging, since you can easily paste code in, attach files, and comment on each other's code, and of course, it works great for having conversations that diverge into multiple threads. I really wanted a way to have the best of both worlds: the structured input of our tool, plus the collaborative conversations of a wave. So, we worked with a team to create the Issue Tracky extension to do just that.

When you install Issue Tracky, you'll see both a new item in your "New Wave" drop down and an icon on your toolbar. You can use the former to create a new issue report from scratch, or you can select text in a wave and click the toolbar icon to turn it into an issue. In either case, a robot will fill the wave in with a title and info gadget, where you can pick the issue type, priority level, and assign the issue to a colleague. From there, you can add a description using all the Wave editing tools you're used to, attaching files or pasting formatted code, and then add your colleagues to the wave to discuss the issue.

The robot will automatically tag the wave based on the information in the gadget, so that you can create saved searches for all of the bugs assigned to you (e.g. "type-bug assignee-pamelafox") or all of the high priority requests ("type-request priority-1"), and easily browse through them in weekly reviews or triage meetings.

We've created a generic version of the Issue Tracky tool that anyone can play with, but we've also open-sourced the code, because we think that this is the kind of extension that companies will want to both use and customize for their own use, perhaps by integrating it with existing systems or changing the gadget to collect different pieces of information. To get started with your own version of it, download the code and follow the instructions in the readme. If you have any questions or want to share how you've extended the codebase, stop by the forum.

We hope this helps you have more productive discussions on the issues in your projects, and better collaboration with your teams. Enjoy!

Google Shared Spaces: How We Made It

In my last post, I talked about why we made Google Shared Spaces. Now I want to talk about how we made it, as I think it may surprise a few folks. Much of the press about Shared Spaces claims that it is built off "Wave technology", when in fact, the only piece of Wave technology that Shared Spaces utilizes is the JavaScript Wave Gadgets API. Since what we set out to make was really pretty simple, we decided to start from scratch and use a combination of open-source frameworks and publicly available APIs.

We use Python App Engine as the hosting platform, and Django 1.0 for the templates & request handling. We embed the gadgets in the pages using the iGoogle Gadget Renderer (a deployed version of the open-source Shindig server), the gadgets communicate with the page using the Wave Gadgets JavaScript API (now open-sourced), and the page communicates with the server using the Channel API (App Engine's approach to Comet). For AJAX processing and UI features, we use the Google-hosted jQuery. For authentication, we use the Twitter API, Yahoo API, and Google Buzz API. For the comments and ratings in the gadgets gallery, we use Disqus threads.
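To give a sense of that page-to-server piece, here's roughly what the client side of a Channel API connection looks like -- a minimal sketch rather than the actual Shared Spaces code. It assumes the page has loaded the Channel JavaScript from /_ah/channel/jsapi and that the server has embedded a token (created with channel.create_channel) into the page; applyRemoteDelta is a hypothetical stand-in for whatever pushes the update into the gadget's shared state.

// Minimal sketch of receiving server pushes via the Channel API.
// 'token' is assumed to have been rendered into the page by the server;
// applyRemoteDelta() is a hypothetical placeholder for updating the gadget.
var channel = new goog.appengine.Channel(token);
var socket = channel.open();
socket.onmessage = function(message) {
  var delta = JSON.parse(message.data);  // server sends state updates as JSON
  applyRemoteDelta(delta);
};
socket.onerror = function() {
  // in practice: ask the server for a fresh token and reconnect
};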

By starting from scratch while reusing the relevant technologies out there, we were able to build the first version pretty quickly (within a few weeks), and we're also able to iterate quickly now (new releases every couple days). While the open-sourced codebase for Wave is around 250,000 lines of code, the codebase for Shared Spaces is now about 5,000. Wave is a complex technology encompassing multiple algorithms, backends, and interfaces, while Shared Spaces is a user-facing app that uses just one small part of that technology and is developed by a small team. If you pick the right tools for the right job, you can do a lot with a little. :)

Google Shared Spaces: Why We Made It

Last week, just in time for the holidays, we released a Google Labs project called Google Shared Spaces. "We" is actually just a few people working in our 20% time: Douwe Osinga - former Wave API tech lead, Jon Tirsen - former Wave contacts tech lead, Vadim Gerasimov - former Gadgets API engineer, and myself - former Wave API developer advocate. As you can see, we all came from the Wave team, where we saw firsthand the ways that users used Wave and the ways that developers used the Wave APIs. Wave was a lot of things to many people, and there are a lot of directions that you could take the ideas in it. Since Wave was cancelled by Google, a small team has been working on open-sourcing it for the past 5 months, so developers can start building off the Wave technology stack, as a whole or as bits & pieces, to take it in the direction of their vision.

The four of us were personally interested in seeing what we could make with just the gadgets part of the stack. The Wave Gadgets API is a simple but powerful API -- it combines the open-source gadgets API with a basic JavaScript API for modifying a shared state (hash map) & retrieving participant information. With just a shared hash map, developers were able to make an astonishing array of gadgets - multi-player games, collaborative diagramming apps, date-picking utilities, even a Flash-like animation tool. In fact, developers made gadgets that were basically full-featured webapps, and they would take up the entirety of the Wave interface when you were using them. Developers weren't using Wave for its conversational abilities at that point -- they were just using it for the collaborative space that it enabled. Some of the developers even added a chat feature to their gadget, as users didn't want to have to scroll down past the gadget to converse with the other participants.
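To give a flavor of how small that API surface is, here's a minimal sketch of a gadget reading and writing the shared state -- the key name and element ID are made up for illustration, and this isn't taken from any particular gadget.

// Read side: re-render whenever any participant changes the shared state.
function onStateChange() {
  var state = wave.getState();
  if (!state) return;
  var count = parseInt(state.get('clicks', '0'), 10);
  document.getElementById('count').innerHTML = count + ' clicks so far';
}

function onLoad() {
  if (wave && wave.isInWaveContainer()) {
    wave.setStateCallback(onStateChange);
  }
}
gadgets.util.registerOnLoadHandler(onLoad);

// Write side: submit a delta, and every participant's state callback fires.
function recordClick() {
  var state = wave.getState();
  var count = parseInt(state.get('clicks', '0'), 10);
  state.submitDelta({'clicks': String(count + 1)});
}

Everything the fancier gadgets do -- the games, diagrams, and animations -- is layered on top of essentially that same get/submitDelta loop.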

The Diagram Editor gadget:

When we saw this happening while Wave was still an ongoing project, we thought of various ways we could improve the Wave UI for these app experiences: full-screen modes, split scrollbars, chat mode. But with Wave cancelled, we thought about how we could start from scratch to create an experience that centered around these collaborative gadgets. At the same time, we wanted to experiment with other aspects of the Wave experience, like the sharing & permissions model. And thus, Shared Spaces was born.

Shared Spaces is entirely centered on the gadgets. The landing page is a list of featured gadgets, and each of them offers a button to "Create a space". Once you click that, you're prompted to log in with either your Google, Yahoo!, or Twitter account. (We didn't see any reason to limit authentication to just Google, particularly since people might want to use Shared Spaces to collaborate with communities outside their Google sphere.) You're then taken to a "space" with a list of participants (just you, to start), the selected gadget, a chat area, and a bunch of share buttons. Once you share the URL with other folks, whether via email, IM, or microblogging, you'll see the other participants show up in the list, and all of you can collaborate on the gadget together - whether that's a game of Sudoku, an RSVP gadget, or a drawing board. Much of the experience is similar to the Etherpad experience, where you can create a collaborative pad with one click, chat on the side, and share that pad by URL; I sometimes think of it as "Etherpad for gadgets."

The Yes/No/Maybe gadget:

We've launched Shared Spaces with what we deemed the minimal features necessary because this is very much an experiment; we want to see how users use this, what they want out of it, and what direction it may go in. Perhaps it will become a full product, or perhaps it will be integrated into existing Google products. And hey, maybe it will inspire non-Google companies to use similar technology in their products. The web is increasingly about collaboration, and I think it is always a good thing to experiment in how all of us can make collaboration easier. :)

Sunday, December 26, 2010

The Costa Rica Surf Camp Experience

I was born in California, but after an unfortunate decision from my dad, I ended up growing up in the cold confines of Syracuse, New York. I huddled for warmth in the glow of our many computers and eventually became a computer geek like my dad, but I often wondered what I would have become had I grown up in southern California instead -- and because it just looks so awesome in the movies, I always imagined myself as a surfer chick. I'm pretty happy as a computer geek these days, but I like to keep my options open, so I figured I should eventually take the first step to surfer chickdom: learning to surf.

Even though I've lived in California multiple times in my life, and lived in the surf kingdom of Sydney for the last two years, I somehow have never managed to get my ass on a surfboard. At one point, I figured it was just one of those things that I swore I'd always try and never actually do. (You gotta have a few of those). But in November, I serendipitously discovered that my colleague had the same wish -- and the same amount of vacation days -- and within a matter of weeks, we had arranged a weeklong trip to the Green Iguana Surf Camp in Costa Rica. We were deciding between Mexico, Australia, and Costa Rica, but eventually picked the latter because a friend recommended it, and because I've been to Costa Rica twice before & have something of a massive crush on that country. (Plus, I miss speaking Spanish - it's a beautiful language and there aren't many opportunities to practice it in Sydney).

We arrived at the surf camp in Costa Rica on December 12th and left on December 20th, and we had an amazing time that week. We of course spent several hours each day learning to surf -- practicing the basics of the "pop-and-hop" -- and washing off in the local waterfalls afterwards (they're as common as pubs are in Sydney). But we also spent a lot of time just enjoying the local culture, like:

  • The food: We'd usually start our meals with an appetizer of "patacones" (triple-fried smashed plantains) & guacamole, then continue on with a "casado" (rice, beans, and a protein) or a full fried red snapper. After surfing, we'd visit a street stand & pick up a ceviche -- raw fish that cooks itself in the acids of a cup of lemon juice. On "Tuna Tuesday" at the local pub, we had seared tuna in wasabi sauce, and it was the most delicious tuna that I've ever tasted - it melted in my mouth. Apparently Tuesday was the day they received the fresh tuna, and they wanted to sell it while it was fresh. It sure was! For snacks throughout the day, we'd walk up to our favorite fruit shop and have them cut up a pineapple (with salt sprinkled on it!) or a mango.

  • The wildlife: We saw lizards everywhere we went, including the largish "Ctenosaurs" which enjoy sunning themselves on hotel & restaurant roofs (and have an awesome dinosaur-sounding name). We also visited the local reptile park, Reptilandia, and wandered around it drinking Rum & Cokes from a can & marveling in a rather tipsy way at the rather large reptiles ("OMG ANACONDA!"). Thankfully, we didn't actually see much sealife during the surfing lessons, besides the crabs scuttling to and from all the holes in the beach. I was happy to imagine that I was surfing in a scary-animal-free zone. :) During one visit to a fruit stand, we found ourselves feeding all our bananas to a very adorable but aggressive Pizote (he showed us both his claws & his puppy dog face). On our last day there, we went horseback riding on a beach, and listened to the local birds calling in the trees (there's one that makes a sound like laughter, and it basically sounded like he was laughing at us the whole time... as if I wasn't already lacking in confidence).

  • The nightlife: I was surprised to discover that our little town of Playa Dominical actually had a pretty thriving nightlife. Our first outing was to a karaoke night, where the locals sang love ballad after love ballad (I say "love", but in fact most of the lyrics were much more about jealousy and rage), and we sang American classics, like "Total Eclipse of the Heart" and "I Want to Hold Your Hand". Our next outing was "Ladies Night," and the club was absolutely packed. The DJ, who we'd happened to befriend on our first day, played a mix of reggae, hiphop, house, reggaeton, and salsa. I loved dancing to that blend of genres, and I even had fun when a local led me in a salsa dance (I don't usually do partner dancing). We went out a few more times after that, and it was always a really fun atmosphere.
  • And most importantly...

  • The people: Before we embarked on this experience, we thought it would mostly just be us two, and that we'd do a lot of solitary reflection and all that deep stuff. But pretty much as soon as we arrived there, we met the crew that would occupy our days going forwards: our fellow surf campers - about 7 people our age and a family with highly entertaining 8-year-old & 3-year-old boys, our surf instructors - 6 guys ranging in age from 15 to 55, plus the family that runs the camp. We had a lot of fun learning to surf & experiencing the local culture with that crew, but in addition to them, we also met many friendly locals -- like the boy that cut up our fruit every day or the taxi driver that snacked with us after our horseback ride.

In short, I loved every minute of the surf camp experience, even the ones where I was battling the saltwater in my eyes & trying desperately to catch a wave, and I'm so thankful to the people of Playa Dominical that made it a warm & welcoming place.

And, oh, yeah, I found out that I'm not that great of a surfer. Back to geeking I go! :)

Sunday, December 5, 2010

Reusable HTML & CSS Teaching Materials

When I decided to bring GDI to Australia and kickstart it with an HTML & CSS mini-course, I had a side goal: create reusable teaching materials. There are a huge number of online resources to help you learn web development, but there are few bundled sets of resources that can be used together to actually teach a topic to a class of students. (Or, at least, few that I could find.) I set out to make materials that I could CC-license and share with potential teachers, so that they could focus on customizing and delivering the materials instead of preparing them from scratch (which takes a surprising amount of time).

Since delivering the materials a few months back, I have cleaned them up and they're now online here, at a nice friendly URL:
http://www.teaching-materials.org/htmlcss/

To see what's there, you can click through the various lesson links. To actually get started re-using the material yourself, you can download the linked zip file, or if you want an easy way to host it online for your class, you can even download the website files from github and deploy them to App Engine or your own server.

If you do end up delivering the course (or some version of it), I would love to hear about it. We need more HTML & CSS teaching in the world!

Friday, November 26, 2010

DIY: Bleaching Dark Brown Hair

As many people know, I am a fan of coloring my hair. It's a form of self-expression and all that good stuff. Since my hair is naturally a thick dark brown (shh, don't tell my colleagues, they think I have no natural hair color), I have to bleach it before dyeing it most colors (except black). I like to DIY things, so here is how I bleach my own hair.

First, when it comes to bleaching, I do not use one of those kits with pretty people on the front. The kits are handy because they come with all the supplies you need and very specific instructions, but I find that they are not very strong - even the ones that purport to be the strongest.

Instead, I start with bleach powder and 40 volume creme developer. The "volume" refers to the strength of the developer, and "40 volume" is the strongest that you'll find. I currently use L'Oreal's Creme Developer and Quick Blue Powder Bleach. At the suggestion of a local hairdresser, I also mix in a packet of L'Oreal Super Blue Creme Oil Lightener.

I then put on cheap latex gloves and mix 1 part powder with 2 parts developer in a tupperware container (which I only use for bleaching!). If you don't mix enough, no worries, you can easily mix more later.

Then I rub the mix over my hair, starting at the ends. Instructions always have you do the roots last, and as it turns out, that's because the chemicals process faster near the scalp, where the warmth of your head heats them up.

When I think that I've got everything covered (don't forget the back of your head!), I cover my head in a piece of aluminum foil and watch TV for 30-60 minutes. I'm used to the slight burning sensation of bleach so I tend to let it stay on for longer, but if it bothers you, you can rinse it out after just 30 minutes.

After I rinse and dry it, I check out how white it became, and whether I missed any spots. If I find spots that are still quite brown, I may wait for it to fully dry and then re-bleach.

If there are no brown spots but it is still a bit yellow and I am trying to dye it blue or go for the blonde/white look, then I wait for it to dry and then put a purple toner on it. A purple toner is basically a light purple hair dye that counters the natural yellow hues in human hair. I currently use Wella Color Charm Liquid Hair Toner.


If I care about maintaining the whiteness of the color, then I sometimes invest in toning shampoo, which is basically like shampoo with a little purple hair dye in it. I currently use Clairol Shimmer Lights.

I also occasionally use an ultra moisturizing conditioner, whenever my hair starts to feel particularly dry and over processed. My current favorite is L'Oreal Mega Moisture.


And that's it... happy bleaching!

Tuesday, November 9, 2010

GirlDevelopIt Sydney: Round 1, A Success!

As I posted in August, GirlDevelopIt is an initiative to increase the number of women in tech through low-cost programming workshops. It was created in New York and is thriving there (now on its 27th class!), and I wanted to try bringing it here to Sydney, Australia.


Our tireless TAs
We started here with the basics, a 5-lesson course on HTML & CSS, with the hope of expanding to more topics if there was enough interest. We ended up filling the room with 40 eager female students from varied backgrounds - like marketing, travel, advertising, and photography - plus 6 super talented teaching assistants with varied expertise - like SEO, startups, and standards. At the end of the course, students put together their own personal websites to show what they'd learnt, and it was awesome seeing the unique webpages that each of them put together.

All in all, I would call this experiment a success, and I'm excited to see the momentum continue. We have an upcoming lecture with 30 RSVPs, we have a new offer of sponsorship (thanks to ThoughtWorks), and more importantly, we have 60 members in our meetup group who are all ready and willing to become women developers.

So, if you're keen and looking to help, here's a wishlist for things that would be awesome:

  • We could use spare laptops for the workshops, if you have any old ones lying around. They typically just need a web browser like Chrome and a text editor like Notepad++.
  • We would love for a hosting company to provide students in the courses with FTP accounts and a teeny amount of disk space. We used my server for the last round, but it couldn't handle more than 8 simultaneous logins, so it was not ideal.
  • We can currently get 10 free books from O'Reilly for each course, but if we had a sponsor (like a bookstore) that would provide free books for every student (~40), that would be just amazing.
  • We would love to have GDI branded t-shirts to give to the students, to help them feel proud of their involvement and to spread the message.

We can always use more women students and teachers, of course, so join the group if you're keen to get involved. Onwards and upwards. :)

Thanks to Kate Carruthers for the embedded photo.

Monday, November 1, 2010

"No Boys Allowed"...And Why I Like It

We just wrapped up our first Girl Develop It course in Sydney tonight. When I was first planning the course, I had males ask if they could be students and TAs, and after some consideration, I said no to them.

Part of me wanted to prove that we could pull it off with an all female ensemble. We ended up enrolling 40 female students, bringing us to full room capacity (daisy-chained power cords, ftw!), and enlisting the help of 6 highly skilled female teaching assistants, from web standards wizards to JS experts. I thus concluded that lack of "womanpower" was clearly not an issue.

The other part of me wanted to see if we could indeed have a better learning environment by having it be all female, as we suggest is the case on the Girl Develop It website. I was the teacher in this course, so I can only give my perspective from the front of the room. But, I have to say, I liked it. I am a straight woman, so when I am giving talks to the mostly all-male crowds at most tech events, I sense a small part of me is trying to impress a small part of them ("that way"). It's not something I'm very conscious of, as I'm usually fairly impassioned by the ideas in my talk, but it is there nonetheless. When I am speaking to a group of all females, I am motivated only by the desire to educate them and not by any hidden desires. I played the part of the teacher in this course, but at the same time, I am also a student in an Afro-Brazilian dance class which is largely female. Similar to my reasoning for enjoying the absence of boys in the web dev course, I find that I enjoy the dance classes more when it is just us girls. I can shake my hips without worrying subconsciously about impressing the boys in the class and having my performance affected by subsequent nervousness.

On a related note, it's nice to be in an environment where we can talk girl stuff and bring up "risque" topics without worrying about making boys feel awkward or wondering if they'll misinterpret our language. In dance class, we often make up rhymes about our "boobs", "hips", and "asses", and they help us learn the move... but it always feels a bit odd to teach them to boys too. That sort of thing doesn't happen as often in the web development course situation (well, maybe during the after-drinks :), but it's nice to have that kind of environment just-in-case.

Finally, it's cool to meet local women. I have to admit that I'm not that great at making friends with girls (I grew up more around males), so I typically only make them when I'm forced to. Being in an entirely female room definitely helps as a forcing function. :) I met a bunch of awesome girls during this course and the dance class who I probably wouldn't have met otherwise, and I'm looking forward to seeing more of them.

I know there are people who may argue that it is sexist to not allow boys into the classes, but I think that if you are going to go the "no boys" route, you should go all the way or you risk losing some of the benefits completely. This doesn't mean that I think everything should be all girls - I will be actively encouraging the GDI students to come to mixed meetups, workshops, and user groups. It just means that I do see benefits to single-gender groups in some situations, at least from my own personal perspective.

Thursday, October 21, 2010

An Unofficial Guide to Geeking It Up Around Sydney

Before I came to live in Sydney, I actually spent a day googling for information on the Sydney developer scene, learning about the local startups, user groups, and mailing lists. I didn't know anyone in Sydney, and I wanted to make sure I both had a way to get to know locals (make new friends!) and also figure out what sort of Google developer events I should organize. In just my first weekend in Sydney, I attended BarCamp Sydney 4 and met many of the people that I'm friends with today, and since then, I've attended something like 50 local meetups, "drink-ups", hackathons, and conferences.

I think it's really fun to attend events from a get-to-know-others perspective and really useful to attend events from a learn-what-others-are-doing perspective, and I want to make sure every Sydney developer is aware of the events going on around them. So, for last night's Girl Geeks Dinner lightning talks, I put together a short preso on the local developer scene, and I've embedded it below. Hope to see you at an upcoming event!

Wednesday, October 13, 2010

Sydney International Food Festival Maps

Every year, Sydney has this awesome International Food Festival filled with food events and deals on meals at local restaurants. In particular, they have this "Let's Do Lunch" deal where you can get nice lunchtime meals at fancy restaurants for $35, and I usually like to get some colleagues together to hit up some of the restaurants. Unfortunately, they never have a map visualizing the locations of all the restaurants, and it's hard to find the places near my work.

So, every year, I make a map of the restaurants. Per request of @MorselsMusings, I've also made maps of the Cocktails, High Tea and Sugar Hit deals. The deals last until the end of October, so check them out now while you have time!

If you're a developer and wondering how I whipped these together, here's the short version:

  • I used Dapper to get a CSV of the name, description, and link from the main page.
  • I converted those into a Google spreadsheet and used the importXML function to get the address from each individual restaurant page.
  • I published the sheets and got the latitude/longitude coordinates using my Spreadsheets Geocoding tool.
  • I made the maps using my Spreadsheets Maps API wizard.

It is a bit of a process, but as I spend half my life doing this sort of thing, it doesn't actually take very long (less than an hour).

Happy fooding!

lscache: A localStorage-based, memcache-inspired library

Over the past few years, I've developed a fair few Python App Engine apps, and I've come to have a huge admiration for memcache. The memcache API is incredibly simple, but at the same time, it's a very powerful way of making my apps scale better with increased user demand. The first time I wrote an App Engine app, it went over quota in the first 6 hours. After adding memcache support in, it never went over 1% of quota ever.

When I'm writing client-side apps, I find myself yearning for something like memcache to reduce my number of asynchronous server requests. HTML5 does offer the localStorage API for setting and getting key/value strings, but that API has no notion of an expiration date. It's great to be able to store copies of my data locally, but most of the time, I also want to be able to expire that data after some amount of time (5mins, 1hour, 1 day). So, I wrote a simple library called "lscache" that wraps on top of localStorage, but adds on the notion of expiration. Here's what it looks like to set and get some data:

lscache.set('somedata', {'name': 'Pamela'}, 60);
if (lscache.get('somedata')) {
  console.log(lscache.get('somedata').name);
}

The library lets you store pretty much anything, like a string, number, or object. The localStorage API itself only stores strings (though according to the spec, that should change soon), but the library uses JSON.parse and JSON.stringify to store non-string objects. There are some objects that can't be stringified, like the Document object in an XMLHttpRequest, in which case you need to convert them to a simple JSON object yourself first.

The library stores the expiration time in a separate key, and when you try to retrieve a key, it will only return it if the current time is before the expiration time (and will remove it otherwise). It calculates the time using the JavaScript Date object, so it is subject to error if the user futzes with their clock - but if they do, it just means the objects will be cached for a bit less or a bit longer than expected, and the world probably won't fall over. And those silly people should stop futzing with their clock. :)
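If you're curious what that looks like in code, here's a simplified sketch of the approach -- not the library's actual source; the '-exptime' key suffix is just an illustrative naming choice, and I'm treating the expiry argument as minutes here.

// Simplified sketch of the expiration approach described above.
function sketchSet(key, value, minutes) {
  try {
    localStorage.setItem(key, JSON.stringify(value));
    localStorage.setItem(key + '-exptime',
        String(new Date().getTime() + minutes * 60 * 1000));
  } catch (e) {
    // storage unsupported or full: skip caching; callers can always re-fetch
  }
}

function sketchGet(key) {
  try {
    var expiration = localStorage.getItem(key + '-exptime');
    if (expiration && new Date().getTime() > parseInt(expiration, 10)) {
      localStorage.removeItem(key);
      localStorage.removeItem(key + '-exptime');
      return null;
    }
    var value = localStorage.getItem(key);
    return value ? JSON.parse(value) : null;
  } catch (e) {
    return null;  // no localStorage support behaves like a cache miss
  }
}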

If the user's browser doesn't support localStorage, the library just won't store anything and will return null when trying to retrieve objects. This works wonderfully with the memcache style of coding, where you never assume that anything is stored, and always fall back to re-retrieving that data if it's not stored. People with "older" browsers will simply get a not-as-speedy performance.
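In practice, that means wrapping requests in the same get-or-fetch pattern you would use with memcache. Here's a small sketch, where fetchFromServer is a hypothetical stand-in for your usual XHR or JSONP call:

// Memcache-style usage: treat the cache as purely optional.
function getSongData(callback) {
  var cached = lscache.get('songdata');  // null if expired, unsupported, or never set
  if (cached) {
    callback(cached);
    return;
  }
  fetchFromServer(function(data) {
    lscache.set('songdata', data, 60);  // same style of expiry argument as above
    callback(data);
  });
}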

I've pushed the lscache code to a github repo (my first!), and also published a demo of the functionality. I originally wrote lscache to speed up the performance of the XMLHttpRequests in a Chrome extension popup, but as that extension isn't published yet, I've also incorporated it into my public RageTube mashup to cache the JSON results of the Dapper and Youtube APIs.

Check out the library, and if you have any suggestions for improvements, feel free to fork. :)

SydJS: JavaScript Libraries Panel Roundup

At last night's monthly meeting of Sydney JavaScript developers, the organizers tried a different format: the panel. I will admit I was a little wary of the format choice, as I've heard a lot of commentary about panels not being so great (and I've attended SXSW, which seemed to reinforce that) -- but I think it turned out quite well. The panel had a range of JS developers who represented different libraries, and they articulated their differences of opinion in constructive ways.

The panelists were:

  • James McGill, Google, Maps API
  • Dan Nadasi, Google, Closure (@DanielNadasi)
  • Matthew Sain, Yahoo!7, YUI 2
  • Tom Hughes-Croucher, Yahoo!, YUI 3 (@sh1mmer)
  • Dmitry Baranovskiy, Sencha, Raphael (@DmitryBaranovsk)
  • Evan Trimboli, Sencha, Sencha
  • Julio Ody Cesar, Awesome By Design, jQuery (@julio_ody)
  • Jared Wyles, Atlassian, AUI (@rioter)


(Photo courtesy of halans - Check out his other SydJS pics.)

I typed up notes during the panel on the questions and answers, and am sharing a paraphrased version of those notes here. I didn't type anything word-for-word, and I missed some of the answers, so don't consider this to be 100% faithful. But perhaps it's useful for folks who want to remember what was discussed last night, or see the kind of questions that were asked at the panel.

Q: Can your libraries be loaded asynchronously, and if not, how do you sleep at night?
Tom: YUI was designed with that in mind from the beginning. It's the only way of loading it.
James: The Maps API does it automatically - the developer only includes the bootstrap, and the API pulls the full library in afterwards. All APIs should do it automatically, otherwise developers won't usually bother.

Q: How do you test the Maps API offline?
James: You must mock, and trust that we've designed our API in a way that makes for reliable mocks (we have). Check out the Closure mocking framework, based on EasyMock.

Q: Why doesn't Google use Raphael for SVG?
James: The Maps API only uses paths, and it isn't that hard to get paths working cross-browser. When you use a library, you're putting performance in someone else's hands, and the Maps API can't afford to do that.

Q: What do you think of CoffeeScript?
Julio: CoffeeScript is not equal to JavaScript. Different languages appeal to different people.
Tom: I talked with Douglas Crockford, and he actually likes CoffeeScript, and thinks that maybe we should have done JS that way (if we could have). But now, even if we could reinvent JS, even if we could make it better, it would be a separate language. JS is winning right now, not because it's the best language ever, but because it's the language of the web. So we should stick with JS.
Dmitry: There is real life, and there is play. I like to play- I like canvas- but I don't use canvas in real projects because it doesn't work in IE. It's not real life. It'd be sad if there was a CoffeeScript programmer that did not also know JavaScript.

Q: What's the purpose of the Closure inspector?
Daniel: It is easier to debug your Closure code before it has been obfuscated, and that's what you normally do. Some bugs, however, only occur after obfuscation, and the Closure inspector is useful for debugging the post-obfuscation bugs. Sometimes those happen when using the advanced compiler options with symbol exporting.

Q: How will we develop JS in the future?
Julio: We will see more modularization, where every page has its own miniframework, its own collection of modules used by the page.
Jared: We will see JS in the server more.
Evan: We will also see JS in the desktop.
Dmitry: We will have better IDEs for developing JS.
Tom: We will have better code, and new features of the language (like strict mode) that encourage developers to write better code.

Q: Do you support touch events in your frameworks?
All: Yes.
James: Currently, it's not too difficult to support touch events since the mobile browsers all followed Apple's lead in implementing them and the API is the same. But, in 8 days, Microsoft is going to release Windows Phone 7, which bundles a ported version of the IE7 browser, and we don't have any idea what touch events will look like. Mobile development is going to get significantly more interesting; we're going from a 1 browser field to a 2 browser field.

Q: What place do UI frameworks like YUI and ExtJS have - should we be using them, and are they accessible?
Tom: (Re accessibility) Frameworks should use ARIA for accessibility, which marks up the page to give screen readers additional info. The fact that most screen readers don't support JS is a technology failure. There's nothing inherent about JS which means it can't be supported. Current accessibility guidelines put the onus on the screen readers. Modern readers like JAWS 7 do support JS. Pick a framework that supports ARIA.
Tom: (Re frameworks) Rebecca Murphey has written recently about the problems with using jQuery for enterprise sites. With libraries like jQuery and Prototype, people tend to write DOM-focused code. With libraries like YUI and Closure, people write component-based code. jQuery has done a good job of getting people started with JS and making it easy to get your site going, but when you're building full apps, the component-based architecture with its inheritance and modules works better. DOM-style architecture gets ugly fast.
Dan: Part of the reason we use a component-based approach in Closure is that it works better at scale. In base.js, goog.inherits lets classes inherit from other classes more easily. The ability to inherit is better for distributed development, for having many developers working off the same codebase.

Q: I come from a Java/C++ background, where tooling support is important. In moving to JS, I'm missing the tools. A lot of the times, I feel like I'm guessing-and-checking. Are you guys keeping secrets from us, or is that the way it is?

Julio: I work with Ruby as much as JS, and it has really nice testing tools - the best of any language I've seen. I hope JS will have similar tools someday.
James: Maybe you should check out Closure compiler. It lets you enforce static typing, which is like testing for a dynamic language - it will catch half the errors. But, in defense of JS, what other development environment lets you inspect your UI, change your UI on the fly, and reflect the changes back?
Dan: Keep in mind that the cost of implementing static typing is that you lose dynamic typing. Static typing can be great for development at scale, but it deprives you of some flexibility.
Tom: I recommend SpeedTracer from Google for performance checking, Firebug for Firefox, and Visual Studio Debugger for IE.
Buchanan (audience): We would love more tools for JS, but on the other hand, I've never lost a day of development due to my IDE refusing to work.

Q: Do we need libraries now? Browsers are better, JS is easier cross-browser.
Julio: We're not at the stage yet where cross-browser isn't an issue.
James: When you're getting started with building an app, you can use libraries to take care of the hard parts for you. But once you start getting significant users, like we did with the Maps API, you'll understand their specific usage patterns, and you will find it better to write your own code that optimizes for their use case.

Q: What was the design problem you were trying to fix when building your library?
Matthew: We designed YUI for the frontpage of yahoo.com, and then decided to open-source it for others to use.
Dmitry: IE has a problem with speed, it's very slow, and it has a problem with DOM, it's just fucked up.

Q: Should we use jslint?
All: Yes.

Q: What would you do better if you were starting over?
Dan: With Closure, we would design API from the start instead of adding bits and pieces. The API is surprisingly consistent, likely due to having the same 2 code reviewers throughout, but it could be better. The documentation is shit. Michael Bolin's book fills in the gaps, but you shouldn't need to read a book to understand how to use it.
James: I would design the code to make it testable first, because it's hard to go back and make something testable. If I really wanted to hate my life, I would code everything in IE6 and get it working reasonably there, and *then* see how awesome it performed in other browsers.
Evan: Augmenting the prototype of basic objects (Array, String, etc) was not a good idea.

Thursday, October 7, 2010

Generating Slides from a Spreadsheet

Lately, I've been playing around with the HTML5 slide deck from slides.html5rocks.com. A few months ago at the GTUG campout, I hacked together an App Engine app for generating HTML5 slide decks. Last week, in preparation for my GDD Tokyo talk on Google's JavaScript APIs, I wrote a client-side mashup that generates an HTML5 slide deck based on data in a published Google spreadsheet, and used it both as my actual slides and as a demo for the talk.


There are some definite benefits to writing your slide content in a spreadsheet:

  • You can look at the revision history of just your content (instead of a confusing mix of code and content)
  • You can share the spreadsheet with other people, and collaborate on the content with them.
  • You can print the content for easy studying.
  • You can create alternate views of the same content, like differently themed slide viewers or slide viewers that show additional columns of information.

Of course, there are disadvantages as well - primarily the fact that Google Spreadsheets wasn't designed as a content management system, and it isn't terribly easy to author multi-line content in the cells.

In case any of you do want to try using a spreadsheet to build your slides, I've made a generalized version of the viewer that will work with any public spreadsheet.

To get started, create a spreadsheet with three columns, 'type', 'title', and 'content'. The type can either be 'intro', 'normal', or 'section', and the title and content can be any text (inc. HTML tags). For example, check out this sample spreadsheet.

Next, publish that spreadsheet using the "Share -> Publish" menu in Google Spreadsheets, grab the key from the URL in the browser, and pass that as the key parameter to the generic slides viewer. For example, here's the URL for the sample spreadsheet:
http://imagine-it.org/sslides/slideshow.html?key=tp1JIiBR7gyKgOKUMDk4d_g

As a bonus feature, you can also include a 'narration' column in your spreadsheet, and display that narration below the slides by passing 'narration=on' to the viewer. For example, here's the Japan talk with narration on:
http://imagine-it.org/sslides/slideshow.html?key=0Ah0xU81penP1dDZWcmxjUjdhOTMyOGxxWXAxMnpKUWc&narration=on
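For the curious, the generation step itself boils down to something like the sketch below. This isn't the viewer's actual code -- fetchRows is a hypothetical helper that pulls the published spreadsheet's rows down as objects with type/title/content properties, and the markup is just illustrative.

// Rough sketch of turning spreadsheet rows into slides.
// fetchRows(key, callback) is a hypothetical helper.
function buildSlides(key) {
  fetchRows(key, function(rows) {
    var container = document.getElementById('slides');  // assumed container element
    for (var i = 0; i < rows.length; i++) {
      var slide = document.createElement('div');
      slide.className = 'slide ' + rows[i].type;  // 'intro', 'normal', or 'section'
      slide.innerHTML = '<header>' + rows[i].title + '</header>' +
                        '<section>' + rows[i].content + '</section>';
      container.appendChild(slide);
    }
  });
}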

As usual, I will keep experimenting with HTML-based slide viewers until I find the holy grail, so stay tuned! :)

Tuesday, October 5, 2010

History of Client-side Web API Technology

As part of GDD Tokyo last week, I gave a talk on Google's "client-side Web APIs". That phrase is a bit of a mouthful (and not a common one), so here's what I mean by it: An "API" is a way for one piece of software to interact with another piece of software, a "Web API" is a way for one website to interact with another website, and a "client-side Web API" allows a website to interact with another website purely using client-side technology. For example, Google's client-side Web APIs include the Charts API (image-based), the AJAX Search APIs (JavaScript-based) and the Gadgets API (iframe-based).

Both because I needed a way to explain the underlying technology behind our APIs and because I found it interesting, I started that talk with an intro to the history of client-side Web API technology -- the series of milestones in the evolution of the web that brought us to where we are today.

I was just a wee lass when the web was young, so I've cobbled this history together by reading old blog posts, mailing lists, tutorials, and press releases. (And yes, it is a bit Google-biased, since I have more access/knowledge of our history than others). I'm posting it here in hopes that others will learn from it and others will teach me more about the early web. Please let me know in the comments if you have corrections or additions.

It all started with...

1990: HTML

In 1990, Tim Berners-Lee created the first prototype web browser and HTML page, and you can actually still view that page today.

At this point, HTML was capable only of text and links, so we could link to other servers' data, but we had no way of including it in our own page.

1993: IMG

3 years later, Marc Andreessen was working on the NCSA Mosaic browser and realized they wanted a way to include images on webpages - so he proposed the IMG tag, implemented it, shipped the browser, and it's stayed to this day. That is a typical story for how HTML gains a new tag - someone needs it and implements it, others copy it, and eventually it's considered part of the standard.

The IMG tag could point to image resources on external servers anywhere on the web, so it was actually the first way you could bring data from other servers onto your page, though the data had to be in image form.

Perhaps the first commonplace use of the IMG tag as an API of sorts was for "hit counters." People would put hit counters on their sites to track visitors, and each counter was actually just an IMG tag pointing at a server, passing in an ID parameter.

1995: JavaScript

In 1995, Netscape and Sun teamed up to introduce JavaScript, a language they predicted would transform the web. At the time they introduced it, JavaScript could only really do programmatically the things you could already do in HTML - like creating IMG tags on the fly -- but it was an important step towards making client-side APIs more feasible.

1996: IFRAME

In 1996, Microsoft introduced the IFRAME element. The IFRAME element could embed another webpage on your page, which is one way of bringing in another server's content in a simplistic manner. It could also be hacked to bring data asynchronously into a page from the same server, similar to what XMLHttpRequest would make possible later.

1996: Flash

In 1996, Macromedia launched the Flash Player plugin. The EMBED or OBJECT tags could now be used to embed a SWF file from anywhere on the web. Flash embeds meant we could embed something more interactive than just an image, like a game or animation.

1999: XMLHttpRequest

In 1999, Microsoft innovated again by introducing the XMLHttpRequest object to JavaScript. XMLHttpRequest could make a request to your server and get data back from it, and your script could then process that data and render it into the page however it liked. By default, you could only bring in data from your own server; only in IE, with particular security settings, could you actually bring data in from other servers.

XMLHttpRequest was important because it got people thinking about pulling data into webpages, and about the possibility of pulling in data from servers other than their own.
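
In modern terms, the basic same-origin pattern looks like the sketch below (the /data/songs.json URL is made up, and early IE actually exposed the object through ActiveX rather than a global constructor):

// A minimal XMLHttpRequest sketch: fetch JSON from our own server and
// render a summary into the page.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/data/songs.json', true);  // async request to our own domain
xhr.onreadystatechange = function() {
  if (xhr.readyState == 4 && xhr.status == 200) {
    var songs = JSON.parse(xhr.responseText);
    document.getElementById('results').innerHTML = songs.length + ' songs loaded';
  }
};
xhr.send(null);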

2004-5: GMail, Google Maps

In 2004, Google launched GMail, the first popular web application that relied upon XMLHttpRequest/IFRAME for asynchronous data retrieval from the server, and really showed off what was possible with those technologies. In 2005, Google launched Google Maps, which used the same technology to transform online maps into an interactive experience.

Feb. 2005: "AJAX"

Around the same time, Jesse James Garrett coined the term "AJAX" to describe the new GMail-style of application, which fetches data asynchronously using XMLHttpRequest and reduces the time users spend waiting on page loads. Once he coined the term and popular JS libraries added support for XMLHttpRequest, AJAX quickly caught on amongst web developers as the new right way to build web applications.

With AJAX alone, however, we were still limited to getting data from our own domain.

June 2005: Google Maps API

A few months later, Google launched version 1 of the Google Maps API, the first JavaScript API by Google and also one of the first JS APIs by a big company. This first API was really a JavaScript library that would dynamically create IMGs for each map tile, position them with CSS, and make it easy to add control DIVs and marker IMGs. Developers would typically use the Maps API in conjunction with AJAX requests to get map data from databases on their server.
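
The tile technique itself is simple enough to sketch: work out which tiles cover the viewport, create an IMG for each one, and absolutely position them inside a container DIV. The sketch below is purely an illustration of that idea -- it is not the actual Maps API code, and the tile server URL is invented:

// Illustration only -- not the real Maps API. Lay out a 3x3 grid of map
// tile IMGs, absolutely positioned inside a container DIV.
var TILE_SIZE = 256;
var container = document.getElementById('map');
container.style.position = 'relative';

for (var row = 0; row < 3; row++) {
  for (var col = 0; col < 3; col++) {
    var tile = document.createElement('img');
    // Hypothetical tile URL; the real API computes tile coordinates from
    // the latitude/longitude and zoom level.
    tile.src = 'http://tiles.example.com/tile?x=' + col + '&y=' + row + '&zoom=12';
    tile.style.position = 'absolute';
    tile.style.left = (col * TILE_SIZE) + 'px';
    tile.style.top = (row * TILE_SIZE) + 'px';
    container.appendChild(tile);
  }
}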

Dec. 2005: "JSONP"

In December 2005, Bob Ippolito wrote a blog post describing a technique he named "JSONP", which used ("hacked") the SCRIPT tag to asynchronously bring data in from other servers.

Finally, with JSONP, we had a way to bring data in from another server without using a server ourselves - as long as that server provides JSONP-compatible output.
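
In its simplest form, the trick looks like this: append a SCRIPT element whose src points at the other server and names a callback function, and the server responds with JavaScript that calls that function with the data. (A bare-bones sketch -- the endpoint and its parameters are hypothetical, but any JSONP-friendly API works the same way.)

// A bare-bones JSONP sketch. The data.example.com endpoint is made up.
function handleSongs(data) {
  // The other server responds with a script that calls this function,
  // e.g. handleSongs({"songs": [...]});
  document.getElementById('results').innerHTML = data.songs.length + ' songs found';
}

var script = document.createElement('script');
script.src = 'http://data.example.com/songs?format=json&callback=handleSongs';
document.getElementsByTagName('head')[0].appendChild(script);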

May 2006: Google AJAX Search API

The next year, Google launched the Google AJAX Search API, which let you embed a Google search box on your site and display the results right there, without visitors ever having to leave the site. Since this API needed to get data from Google's servers while being used on any domain, it used JSONP behind the scenes and wrapped it up in some JavaScript functions.

This was one of the first APIs that used JSONP to let developers get data from other servers and include it on their site, and paved the way for others.
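
For reference, embedding a search box looked roughly like the snippet below. This is recalled from the old AJAX Search API documentation rather than copied from it, so treat the class and method names as approximate (and it assumes the www.google.com/jsapi loader script is already on the page):

// Roughly what using the AJAX Search API looked like (from memory -- class
// and method names may not be exact). Assumes the google.com/jsapi loader
// script is already included on the page.
google.load('search', '1');

function onSearchLoad() {
  var searchControl = new google.search.SearchControl();
  searchControl.addSearcher(new google.search.WebSearch());
  searchControl.draw(document.getElementById('searchcontrol'));
  searchControl.execute('music videos');
}

google.setOnLoadCallback(onSearchLoad);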

And that brings us to...

Present-day: Client-side Web API Technology

Thanks to the contributions to HTML/JS over the years, we now have these technologies to power client-side Web APIs:

  • Images
  • Iframes
  • JavaScript
  • JSONP
  • Flash

Note: It has been pointed out in comments that other plugin technologies are also important in this history, and are used as or inside APIs, like Java Applets and Silverlight. ActiveX scripting also played a part. We do not use these particular technologies in Google client-side Web APIs, but I will look into their history and update this when I have the chance.

Wednesday, September 22, 2010

RageTube: Using APIs to Bring Music Video Playlists Online

As I've mentioned in other posts, I have a thing for music videos. Whenever I visit a new foreign country, I try to find the local music videos channel and learn about their culture entirely through hours glued to the channel (it kind of works!). In Australia, I eventually discovered Rage, a 6-hour long stretch of music videos that plays every Friday night, Saturday morning, and Saturday night (midnight to 6am!). I did my best to reorganize my life so that I could catch at least one of the 6-hour stretches each weekend, but alas, I eventually acquired hobbies and the semblance of a social life. At the same time, I've been increasingly disappointed with my work-time music options here — I can't use Pandora (US only), can't use Spotify (Europe only), and have had mixed success with Grooveshark. So, I decided to kill two birds with one stone, and make a mashup that'd let me watch Rage during the day in the form of Youtube music videos: RageTube!

To use RageTube, you provide a URL to any playlist from the Rage archives and it will take care of finding music videos on Youtube for all of the songs and playing through the list one at a time. You can skip through songs you don't like with the "Next" button, or you can click ahead or back to videos that you want to watch immediately. Plus, if you want to remember what songs you liked (or hated!), you can click the "Yay", "Meh", or "Nay" ratings widget and see your rating displayed in the playlist. That makes it even easier to find favorite videos to skip to later.

I wrote RageTube on a Monday morning, and after sharing it on Twitter, I've discovered that there are quite a few Rage fans who are happy to have a way to enjoy Rage at work, including a few of my Australian colleagues abroad who never get the chance to watch Rage on TV anymore. I've added a few features since the initial version, like the ratings widgets, share-by-URL, and support for older playlists, but I still have plenty of room to improve it, like by adding a playlist picker and playlist ratings.

Now, I have to admit I didn't write RageTube just because I needed a better music option at work — I also plan to use it as a demo in my upcoming GDD Tokyo talk to showcase how much is possible with just the Google JavaScript APIs. The entire mashup is client-side - one HTML file and two JS files - and it relies on three different JS APIs as well as some HTML5 functionality.

For those interested, here's how it works behind the scenes:

  • When you enter a URL, I send that URL through open.dapper.net, a service that screen-scrapes websites and gives you the desired nodes in the format of your choice. I fetch the JSON output using a lightweight JSONP library, store each song's artist and title in an internal JS array, and render them out as a scrollable list.
    var dappUrl = 'http://open.dapper.net/transform.php';
    var params = {'dappName': dappName, 'transformer': 'JSON', 'applyToUrl': playlistUrl}
    JSONP.get(dappUrl, params, function(json) {
      ....
    });
    
  • I then try to play the first song, and when I discover that there's no Youtube ID stored in the internal array (as will be the case for the first song), I use the Youtube JSON-C API to find the first matching video.
    var query = song.artist + ' ' + song.title;
    var searchUrl = 'http://gdata.youtube.com/feeds/api/videos';
    var params = {'v': '2',  'alt': 'jsonc',  'q': query}
    JSONP.get(searchUrl, params, function(json) {
      song.results = json.data.items;
      song.youtubeId = json.data.items[0].id;
      ...
    });
    
  • The first time that I play a video, I embed the Youtube player SWF in the page using SWFObject, specifying a few parameters that will make it possible for me to use JavaScript to interact with that player later using the Youtube Player API.
    var params = {allowScriptAccess: 'always', allowFullScreen: 'true'};
    var atts = {id: 'youtubeplayer'};
    swfobject.embedSWF('http://www.youtube.com/v/' + song.youtubeId +
     '?autoplay=1&fs=1&enablejsapi=1&playerapiid=ytplayer',
     'videoBlock', '425', '356', '8', null, null, params, atts);
    
  • Once I get programmatic access to the player, I register a callback so that I know whenever the player changes status (started/paused/ended).
    function onYouTubePlayerReady(playerId) {
      youtubePlayer = document.getElementById(playerId);
      youtubePlayer.addEventListener('onStateChange', 'onYouTubePlayerStateChange');
    }
    
  • When the video ends, I play the next song. (I've already got the Youtube ID for the next song because I always search for the next video whenever I start playing a video.)
    function onYouTubePlayerStateChange(newState) {
      if (newState == 0) {
        currentSong++;
        playNext();
      }
    }
    
  • To let users rate each video, I use a localStorage-based doodad that I wrote for another music video mashup (there's a rough sketch of how it works after this list).
    likerCol.appendChild(LIKER.createLikerMini(song.id));
    ...
    likerBlock.appendChild(LIKER.createLiker(song.id));
    
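The LIKER module isn't published anywhere, but it's a thin wrapper around localStorage, so here's a rough sketch of the idea. The function names match the calls above; the bodies and the storage key format are illustrative rather than the actual code:

// A rough sketch of a localStorage-backed rating widget, in the spirit of
// the LIKER calls above (illustrative, not the real implementation).
var LIKER = {
  createLiker: function(songId) {
    var block = document.createElement('div');
    var ratings = ['Yay', 'Meh', 'Nay'];
    for (var i = 0; i < ratings.length; i++) {
      var button = document.createElement('button');
      button.innerHTML = ratings[i];
      button.onclick = (function(rating) {
        return function() {
          // Persist the rating so it survives page reloads.
          localStorage.setItem('rating-' + songId, rating);
        };
      })(ratings[i]);
      block.appendChild(button);
    }
    return block;
  },
  createLikerMini: function(songId) {
    // Read-only display of any previously saved rating.
    var span = document.createElement('span');
    span.innerHTML = localStorage.getItem('rating-' + songId) || '';
    return span;
  }
};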

Happy RageTube'ing!

Monday, September 20, 2010

Embedding Feed Gadgets in Google Sites

Today, I spent a few hours re-organizing waveprotocol.org to be easier to navigate. As part of that re-org, I wanted to also make it clear to people visiting the site that the protocol is in active development by showing them the activity from our discussion group and code repository. Since both the group and project offer ATOM feeds, I figured I could just embed a gadget to show the latest posts from the feeds.
After spending a good half hour trying to find a gadget that would do just that, I gave up and wrote one myself. And now you can use the gadget yourself if you're in a similar situation. :)

Using the Gadget

Here's how you can actually embed the gadget on your site:

  1. Put the target page in editing mode. Open the "Insert" menu and select the final "More gadgets" option.
  2. Select "Add gadget by URL" in the sidebar of the dialog.
  3. Enter this URL in the input box:
    http://pamelafox-samplecode.googlecode.com/svn/trunk/feedgadget/feedgadget.xml
  4. Enter the URL to your feed in the "Feed" input box in "Setup your gadget". The feed must be either an ATOM or RSS feed.
  5. Customize the width, height, and title as desired.

Tip: If you want to embed multiple gadgets next to each other, change your page layout to a multi-column view and stick a gadget in each column.

Finding Feeds

Here are some tips for finding feed URLs for various Google properties:

About the Gadget

For developers, here's some information about how the gadget works.

The gadget uses the AJAX Feeds API and the google.feeds.FeedControl class, and of course, it uses the gadgets API. It's actually a nice example of how to write a simple gadget that uses a Google API and user preferences:

<Module>
    <ModulePrefs title="Feed Control" height="400"/>
    <UserPref name="feedurl" display_name="Feed" default_value="https://groups.google.com/group/wave-protocol/feed/atom_v1_0_msgs.xml"/>
     <Content type="html"><![CDATA[ 
     <div id="feed" style="font-size: small;"></div>
     <script type="text/javascript" src="http://www.google.com/jsapi"></script>
     <script type="text/javascript">
     google.load("feeds", "1");
   
     function initialize() {
       var prefs = new gadgets.Prefs();
       var feedControl = new google.feeds.FeedControl();
       feedControl.addFeed(prefs.getString("feedurl"), "");
       feedControl.draw(document.getElementById("feed"), {});
     }
     google.setOnLoadCallback(initialize);
     </script>
    ]]></Content>
</Module>

Wednesday, September 15, 2010

Using OAuth with Spreadsheets API on Django/AppEngine

In a previous blog post, I showed how to import a published spreadsheet feed into an App Engine datastore by just grabbing the JSON. For another project I'm working on, I need to be able to import a *private* spreadsheet into an App Engine datastore. Because of the need to authenticate the user (via the multiple steps of the ever-so-elegant OAuth dance), this importing requires much more finagling.

With the help of my trusty colleague Vic Fryzel, I've put together a set of Django views that use the Python GData Client Library and should run both on App Engine Django and, with some modification for token storage, on other Django stacks. I'll walk through the views here.


There are four URL handlers required, two for token requests, one for actually importing the spreadsheet, and one to manage the flow:

urlpatterns = patterns('',  
  (r'^get_oauth_token', 'importer.views.get_oauth_token'),
  (r'^get_access_token', 'importer.views.get_access_token'),
  (r'^import_spreadsheet', 'importer.views.import_spreadsheet'),
  (r'^$', 'importer.views.main_page'),
) 

When the user visits the main page, they are asked to log in so that the app can remember their auth tokens, and if they are already logged in but have no access token yet, they are redirected to the first token handler:

def main_page(request):  
  if not users.get_current_user():
    return HttpResponseRedirect(users.create_login_url(request.build_absolute_uri()))

  access_token = gdata.gauth.AeLoad(ACCESS_TOKEN)
  if not isinstance(access_token, gdata.gauth.OAuthHmacToken):
    return HttpResponseRedirect('/importer/get_oauth_token') 

In this first step of the OAuth dance, the app requests an OAuth request token for the specified scope (Spreadsheets) and key/secret (anonymous, since I haven't registered my app), passing along a callback URL back to my app. It saves that token to the App Engine datastore using a convenience function in the client library. Then it redirects the user to the authorization URL for that token, and the user is presented with the "Grant access" screen.

def get_oauth_token(request):
  oauth_callback_url = ('http://%s:%s/importer/get_access_token' %
      (request.META.get('SERVER_NAME'), request.META.get('SERVER_PORT')))
  request_token = client.GetOAuthToken(SCOPES, oauth_callback_url,
      CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
  gdata.gauth.AeSave(request_token, REQUEST_TOKEN)

  authorization_url = request_token.generate_authorization_url()
  return HttpResponseRedirect(authorization_url)

When the user returns from the authorization screen to the callback handler, the app retrieves the original token, asks Google to upgrade that to an access token, and saves the access token to the App Engine datastore again.

def get_access_token(request):  
  saved_request_token = gdata.gauth.AeLoad(REQUEST_TOKEN)
  request_token = gdata.gauth.AuthorizeRequestToken(saved_request_token,
      request.build_absolute_uri())
  access_token = client.GetAccessToken(request_token)
  gdata.gauth.AeSave(access_token, ACCESS_TOKEN)
  return HttpResponseRedirect('/importer/') 

The user is then redirected to the main page, which sees that there is now an access token for the user and shows an input box for providing a spreadsheet URL. Once it has that URL, it retrieves the list feed for the spreadsheet and saves each row as an entity in the datastore.

def import_spreadsheet(request): 
  import re
  import models

  client.auth_token = gdata.gauth.AeLoad(ACCESS_TOKEN)

  spreadsheet = request.GET.get('spreadsheet')
  if spreadsheet.find('google.com') > -1:
    spreadsheet_key = re.search('key=([^(?|&)]*)', spreadsheet).group(1)
  else:
    spreadsheet_key = spreadsheet
  worksheet_id = 'od6'
  list_feed = ('https://spreadsheets.google.com/feeds/list/%s/%s/private/values' %
      (spreadsheet_key, worksheet_id))
  feed = client.get_feed(list_feed,
                         desired_class=gdata.spreadsheets.data.ListsFeed)
  for row in feed.entry:
    firstname = row.get_value('firstname')
    lastname = row.get_value('lastname')
    email = row.get_value('email')
    person = models.Person(firstname=firstname, lastname=lastname, email=email)
    person.save()

  return HttpResponse('Saved %s rows' % str(len(feed.entry))) 

Using that code, the end result is going from the rows of the source spreadsheet to a corresponding set of datastore entities, one entity per row.

To see the full code (with inline comments), check it out from my repository or download the zip.

For simplicity's sake, this sample shows the simplest possible import. In my actual project, I am also creating an entity that represents the entire spreadsheet, and the entity for each row refers back to it. In addition, I have code to convert the spreadsheet's string values into other model types, like dates.

Hopefully this project can serve as a basis for other developers using spreadsheets as an import source for their apps. Enjoy!

Sunday, September 12, 2010

Porting from an App Engine RequestHandler to a Django View

For whatever reason, I've found myself porting Python App Engine apps over from App Engine's "django-esque" webapp framework to true Django 1.0, with views.py, urls.py, and the like.

Besides learning about how urls.py and views.py work, I had to do some research to figure out how some of the webapp-isms translated to Django-isms, so I thought I'd post my findings here, comparing the webapp way and the Django way side by side:

Webapp:
  author = self.request.get('author')
Django (what you use depends on how specific you want to be about where the parameter was passed):
  author = request.GET.get('author')
  author = request.POST.get('author')
  author = request.REQUEST.get('author')

Webapp:
  class MyRequestHandler(RequestHandler):
    def get(self):
      # Do stuff
    def post(self):
      # Do other stuff
Django:
  def handle_request(request):
    if request.method == 'GET':
      # do stuff
    elif request.method == 'POST':
      # do other stuff

Webapp:
  host = self.request.host
Django:
  host = request.get_host()

Webapp:
  url = self.request.uri
Django:
  url = request.build_absolute_uri(request.path)

Webapp:
  query_string = self.request.query_string
Django:
  query_string = request.META['QUERY_STRING']

Webapp:
  self.error(500)
Django:
  return HttpResponse(status=500)
or:
  from django.http import HttpResponseServerError
  return HttpResponseServerError()

Webapp:
  self.redirect('/gallery')
Django:
  from django.http import HttpResponseRedirect
  return HttpResponseRedirect('/gallery')

Webapp:
  path = os.path.join(os.path.dirname(__file__), 'index.html')
  self.response.out.write(template.render(path, template_values))
Django:
  from django.shortcuts import render_to_response
  return render_to_response('index.html', template_values)
Or, if for some reason you need the in-between products of that shortcut (like the generated HTML string), you can use the longer version:
  from django.template.loader import get_template
  from django.template import Context

  t = get_template('index.html')
  html = t.render(Context(template_values))
  return HttpResponse(html)

Webapp:
  self.response.headers['Content-Type'] = 'application/atom+xml'
  self.response.out.write(xml_string)
Django:
  return HttpResponse(xml_string, mimetype='application/atom+xml')

If you have suggestions for better "transformations", please let me know. I'm fairly new to Django and am happy to learn more about the right way to do things.

Thursday, September 9, 2010

Teaching & Using Google Data APIs @ USYD

As I've posted on other blogs, I always love the idea of teaching Web APIs to university students and finding ways to use them in class assignments. Today, I visited the University of Sydney (USYD) and saw multiple ways that they're using APIs in education, and I'd like to share them here.

First, I gave a guest talk to an Object-Oriented Frameworks class on "Google Data APIs & the Google Docs API". For the next two months, the students in that class will be working on group projects with an education theme and combining the technologies they've studied, and I hope to show up on their demo day and see some cool examples of API usage.

After the talk, I went to lunch with the professor's research team, and watched videos about some really neat education- and API-related projects they're working on: iWrite, a system for submitting assignments as a doc and getting instructor feedback, and Glosser, a system for automatically generating questions about papers to make students think more about them, and for analyzing each group member's contributions to a paper. Both of these use Google Data APIs in conjunction with the students' USYD Google Apps accounts, and are great examples of how research, APIs, and Google products can interact.

Though the Google Data APIs are not as "sexy" as our other APIs, they are incredibly useful and great teaching tools, since they span many Google products and build on existing web technologies like XML, ATOM, OAuth, and the HTTP protocol. They're also particularly useful for students at Google Apps-enabled universities, since they can be used to create applications for accessing and modifying data in the Google Apps suite that they use daily.

When I was a TA at my old university (USC), we used Google docs and spreadsheets in our classes for keeping tabs on group work, and I used the APIs to automate processes for the professor. That was right before USC actually became a Google Apps domain -- if I were there as a TA or student now, I'd probably spend all day hacking on the Google data APIs and App Engine to make cool apps for classes and clubs, and trying to get my classmates to join the fun.

Anyway, it was great to see how USYD is using our APIs across both their classes and research. Let me know if your university is up to anything similar! :)