Category Archives: Computing

Why Having Different Passwords Matters

For years, many people I know, and I, used a fairly simple password strength hierarchy.  You had a simple password for all those forums and unimportant websites.  You had a more complex password for sites that involved transactions, maybe your utility company, or Amazon.  Finally, you had a top-level password reserved for bank or credit card website access, which probably exceeded 12 characters and had punctuation and mixed-case letters.  The rationale was that if someone hacked one of the lower-level passwords, they would only have access to some special-interest forum, and who cared about that?  You could always reset the password.

But here’s why we were wrong.  You see, most websites use an email / password combination, and what works on one site works on another.  You may think, even if they get my password by hacking one site, how are they going to guess which other sites I’m a member of?  You think this because you’re imagining some bored teenage hacker staying up all night entering your email / password combination into a couple of key sites.  But he doesn’t exist.  Your details are fed into a network of computers that try your combination against thousands, if not millions, of sites.  And when a match is found, your details are stored in a database and sold to other hackers and organizations.  So that little forum, run by a part-time enthusiast who didn’t have time to implement secure password storage, becomes ground zero for a sequence of attempted logins on websites around the world.

This is why you should have a different password for every site you visit that requires registration.

I’ll repeat that.  This is why you should have a different password for every site you visit that requires registration.  It’s important because, just like dominoes, once one site is compromised, every other site can fall.

Arguably, the best solution is to have a password manager take care of all your passwords for you.  Dashlane has been recommended to me by many people whose opinions I respect.  But there are still aspects of this that cause concern.  You are putting all your eggs in one basket with a password manager; if someone cracks that nut, they get everything.  I know it’s unlikely to happen, but it’s still a risk, especially as password managers now work in the cloud, enabling your password management to work across multiple devices, and we all know how secure the cloud is proving to be.

No.  I think the solution for the more paranoid amongst us is to maintain a system of unique passwords for every site, but create a method for minimizing the amount of information you have to remember.

So how do you go about devising a scheme?  For this we need to understand a little about how passwords are stored in most online databases, which control your access rights to websites.  Typically, your plain-text password is converted into a cryptographic hash.  In the past, this might have been MD5, but that has proven to be less secure than previously thought.  Its successor, SHA-1, has in turn been replaced by the more secure SHA-2.  Both SHA algorithms were produced by the NSA, which might give you some cause for concern, but peer review seems to indicate that they are secure.  What a hash algorithm does is convert your password into a long, effectively unique string of characters.  For every password you use, a different hash is produced.

Now, there is a small problem with this.  The hash for a specific password is always the same, so if you know a plain-text password and its corresponding hash, you can search for that hash in another website’s database and immediately know the password behind it.  In fact, there are many websites devoted to storing MD5 hashes, where you can look up a hash and get back its plain-text string.  You’d be surprised how many MD5 hashes are on these sites.  The recommended method for protecting against this is a technique called “salting”.  Salting adds an additional string to your password before converting it to a hash and storing it in the database.  This means that the hash produced is doubly obfuscated, and the same plain-text password stored in two separate databases will produce two different hashes.
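To make the difference concrete, here is a minimal Ruby sketch using the standard library’s Digest module (the password and salt strings below are invented purely for illustration).  Unsalted, two sites storing the same password store the identical hash; salted, they don’t:

```ruby
require 'digest'

password = 'Secret123'  # hypothetical password, for illustration only

# Unsalted: the same password always yields the same hash, so a hash
# leaked from one site can be looked up against another site's database.
site_a_hash = Digest::SHA256.hexdigest(password)
site_b_hash = Digest::SHA256.hexdigest(password)
puts site_a_hash == site_b_hash  # identical on both sites

# Salted: each site prepends its own salt string before hashing, so the
# stored hashes differ even though the password is the same.
salted_a = Digest::SHA256.hexdigest('siteAsalt' + password)
salted_b = Digest::SHA256.hexdigest('siteBsalt' + password)
puts salted_a == salted_b  # different, despite the same password
```

Real systems use a random salt per user (and a slower hash than plain SHA-256), but the principle is the same.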

Using the method of salting as inspiration, it is possible to design a personal password system that is easy to remember but very difficult to crack.

Here’s how you start.  Pick a phrase or collection of words (minimum of three) and use title case, that is, make the first letter of each word upper-case.  Add a couple of numbers and a special character (something like an exclamation point, hyphen, or curly bracket).  You should end up with something like this: PurpleFriendlyWombat37!  You can decide how to mix up the words, numbers, and punctuation yourself; better yet, add punctuation throughout the phrase.  Whatever you decide on, just make sure you can remember it.  This is actually a lot easier than you think.  Now here’s the twist, the salt so to speak.  You’re going to add the name of the website to your password root.  Let’s say it’s America Online, or “AOL”.  For AOL your password would be “PurpleFriendlyWombat37!AOL”, or, to mix it up further, perhaps “PurpleAOLFriendlyWombat37!”.  Whatever method you decide on for adding to your password, you will apply it to all passwords for all sites.  Therefore, if you chose the second method above, your password for citibank would be “PurplecitibankFriendlyWombat37!”.
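As a sketch, the second mixing method above (inserting the site name after the first word of the root) might look like this in Ruby.  The root phrase is this article’s example, and the helper name site_password is my own invention:

```ruby
ROOT = 'PurpleFriendlyWombat37!'  # your memorized root password

# Insert the site name after the first word of the root, matching the
# second mixing method described above ("PurpleAOLFriendlyWombat37!").
def site_password(site)
  head = ROOT[0, 6]   # 'Purple' -- the first word of the root
  tail = ROOT[6..-1]  # 'FriendlyWombat37!'
  head + site + tail
end

puts site_password('AOL')       # PurpleAOLFriendlyWombat37!
puts site_password('citibank')  # PurplecitibankFriendlyWombat37!
```

Of course, the whole point is that the rule lives in your head, not in a script; the code is only to show that the rule is mechanical and easy to apply consistently.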

That’s it.  You now only need to remember one root password, which is most likely more complex and secure than most of the passwords you’re currently using, and with the addition of the website or organization name you’ve got a different password for every site.

Annoyingly, a few organizations require password changes on a regular basis (30, 60, 90 days, or so).  This is irritating for everyone concerned, from the users to the support desks that constantly have to deal with forgotten passwords.  This method doesn’t handle situations like that.  You’re on your own there.

What’s Wrong? – Part 1

What’s wrong?  I’ll tell you what’s wrong.

Here I am in New York City.  I pull up Apple Maps on my iPhone running iOS 8.  I type the following string to search for an address: “71 West Broa”.  This offers two suggestions, one in England, the other in Australia.  I continue typing, making the string “71 West Broadway”.  Up comes an option in New York City, presumably the one I’m after.  I select that suggestion only to receive the response “No Results Found, OK”.  OK?  No, not OK.

I think a bit and edit the string to be “71 W Broadway”, I select the suggestion offered in New York City, and it finds it.

What’s wrong?  –  That’s what’s wrong!

 

Windows 8 – Lessons Learned

I got a great deal on a Samsung Ultrabook.  The person I was buying it for was most insistent that it didn’t have a touchscreen, which was a feature surprisingly hard to find.  It also came with Windows 8 pre-installed.  I reasoned that it couldn’t be that different.

Lessons Learned:

If you get a laptop with Windows 8 the first thing you need to do is switch the thing on and see what all the fuss is about.  This means you have to register it with email address(es), your phone number(?), and a password.  That password has to be all mixed upper/lower case, and can’t contain any part of your email address.  No problem, perfectly understandable precautions.

The first thing you’ll see is the new “tiled” layout screen, which disappears off to the right of the screen, containing a whole bunch of apps that Samsung has helpfully installed for you, most of which you won’t want.  How do you scroll right?  Well, since you don’t have a touch screen, this is way more awkward than it should be, but with a little trial and error, and by hooking up a mouse instead of trying to use that awful touchpad, we find that the scroll wheel does what it should and scrolls nicely.  We’ll come back and fix this tile interface later on.

The very first thing you should do is get online and download about 87 Windows 8.0 updates.  If you try to do this without first responding to the confirmation email from Microsoft, which is sent after you complete your registration, the progress bar will quite happily sit there flashing away FOREVER.

So, after completing the registration process, make sure you respond to the email from Microsoft.

If you have decided to upgrade to Windows 8.1 (advised), follow the instructions and go to the store.  You will NOT be able to find the upgrade anywhere in the store, or online at www.microsoft.com.  This is because you have to install all those updates (see above) before the store will allow you access to the 8.1 upgrade.

So, start the updates.  The progress bar will sit there flashing away for about 5 minutes, making you wonder what’s wrong now, before beginning to show some activity.

Go for a long walk.  Those updates are going to take a while.

After updating, rebooting, and updating some more, you are now ready to upgrade to Windows 8.1.  Go to the store and you’ll find the option to upgrade visible.

Begin the download of the 2.9GB (!) upgrade package.  Words fail me.

Go for another long walk.  I don’t care how great your broadband connection is, you’re going to hit Microsoft’s servers, and they sure as hell aren’t going to be moving like greased lightning.

Finally, after the obligatory reboot, and more acceptance of terms and conditions, you’re back at that tiled page again.  Since you didn’t spend too long looking at it the last time, it’s hard to tell if anything actually changed.  The “internet” says it changed, and that it’s all for the better, but who knows.

Now I’m no Luddite, but when it comes to interface paradigms on a machine to be used as a DESKTOP computer, there’s one thing I like to see and be able to interact with, and that’s a DESKTOP.  So, a little research and a speedy download later (not through the Microsoft “Store”, but from their own website, www.classicshell.net), we arrive at the very useful and thoroughly essential “Classic Shell”.  Coupled with the configuration to boot directly to the desktop, and voila!  A functioning ultrabook running Windows 8.1.

 

Google Apps API and Ruby – Part 2

My last post dealt with calling Google Apps API methods from a Ruby script.  I had hoped that I’d be able to do everything I needed with RESTful methods, but after a short while it became increasingly difficult, and I had to resign myself to using the Google API client gem.  Initially I was resistant to using the pre-packaged solution, as I like to know what’s going on under the hood when I’m interfacing with something, but it became apparent that the advantages of using the gem far outweighed the simplicity of plain RESTful calls, especially with some of the APIs that I was going to be interfacing with.

As it turns out, there are some very good reasons to use the google-api-client gem.  Since you can easily inspect the object’s methods and properties, you can determine all the methods associated with an API.  For instance, to get the methods available for the version 3 Calendar API, once you’ve created a client object, do the following:

api = client.discovered_api('calendar', 'v3')
puts api.methods

Nice right?

Even more useful is the ability to determine the APIs available, and their version numbers:

puts "Title \t ID \t Preferred"
client.discovered_apis.each do |gapi|
  puts "#{gapi.title} \t #{gapi.id} \t #{gapi.preferred}"
end

And that gives you a great start for attacking the Google API documentation.

There are some additional differences for creating a client connection.  You’ll need to have a private key file accessible to your script, on top of the usual server-side configurations.  You’ll also want to check out the advantages of using a “service account client”, which allows you to run API calls as a specific user; this is extremely useful when dealing with the somewhat flawed Google Apps access security model.

RESTful Google APIs, OAuth2, and Ruby

So you’re all excited to find out that the new Google APIs use RESTful methods, GET, PUT, PATCH, DELETE, etc.  And you’ve done some simple tests with calls that only require an inline key.  Everything looks good.  Now you want to do some scripting with Ruby, but all the examples on the Google site use the “google/api_client” gem, and that’s not very RESTful, even if it does bundle the OAuth2 class in it.

So task number one is to deal with OAuth2.  OAuth2 looks like a pain.  You’ve got to register your application, get a client key and secret code, use those to make an HTTP request for a token, and it all looks a little manual.  You can read all about it elsewhere, but if you want to start accessing the Google APIs, you’re going to have to deal with it.  A great place to play around with the APIs and how they work in conjunction with OAuth2 is another Google resource, the OAuth2 Playground: https://developers.google.com/oauthplayground.  This will show you how to define your calls, and shows all the responses.  Very handy for testing things out.

Avoiding the temptation to try to automate the manual authorization step, a little digging around reveals that someone has already done the heavy lifting: http://adayinthepit.com/2012/07/13/google-apis-sinatra-oauth2/.  To paraphrase their code into a single Ruby file, we get this:

require 'sinatra'
require 'oauth2'
require 'json'

G_API_CLIENT = "{your-client-key}"
G_API_SECRET = "{your-secret}"

# Scopes are space-separated strings
SCOPES = ['https://www.googleapis.com/auth/calendar'].join(' ')

def client
  @client ||= OAuth2::Client.new(G_API_CLIENT, G_API_SECRET, {
    :site          => 'https://accounts.google.com',
    :authorize_url => "/o/oauth2/auth",
    :token_url     => "/o/oauth2/token"
  })
end

get '/' do
  redirect client.auth_code.authorize_url(:redirect_uri => redirect_uri,
                                          :scope        => SCOPES,
                                          :access_type  => "offline")
end

get '/oauth2callback' do
  access_token = client.auth_code.get_token(params[:code], :redirect_uri => redirect_uri)
  puts "Successfully authenticated with the server"

  # This is where our API call is going to go
  response = access_token.get('https://www.googleapis.com/calendar/v3/calendars/{your-calendar-id}').parsed
end

def redirect_uri
  uri = URI.parse(request.url)
  uri.path  = '/oauth2callback'
  uri.query = nil
  uri.to_s
end

In this example the script is going to retrieve the calendar metadata settings for the calendar identified by {your-calendar-id}.  Because we’ve used the “.parsed” method, it will return the metadata as a hash.  You can access the hash in the normal way, e.g. response["id"].
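To illustrate what “.parsed” hands back, here is a small offline sketch.  The JSON below is a made-up, abbreviated fragment shaped like calendar metadata, not a real API response:

```ruby
require 'json'

# A hypothetical, abbreviated calendar-metadata response body.
body = '{"kind": "calendar#calendar", "id": "mycal@group.calendar.google.com", "summary": "Gigs"}'

# For a JSON response, the OAuth2 gem's .parsed does roughly this:
response = JSON.parse(body)

puts response["id"]       # mycal@group.calendar.google.com
puts response["summary"]  # Gigs
```

Everything in the response is plain strings, hashes, and arrays, so no gdata-style wrapper objects are involved.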

A couple of notes here:

  • You’re going to need to register with Google to get your API access.  All of this can be done here: https://code.google.com/apis/console
  • When you create your API access ID you’ll need to define what APIs your client is going to access.  These you define in the SCOPES variable, but you’ll also need to define them in the APIs console (see above).  To find the correct string for the scopes you’ll need to dig around in the API reference for the Google API you want to use.  A good place to start digging is here: https://developers.google.com/apis-explorer/#p/, click on the API and find the link for the “reference”.
  • Because I’m not going to start trying to parse the redirect, when you run this you’re going to need to open a browser and point it at the defined Sinatra address (normally http://localhost:4567).  This will open a dialog with Google to validate your credentials and issue your script an OAuth2 token, after which the code in the redirect URI (/oauth2callback) will execute.

So that’s the way most of the GET calls are going to be handled, but what about POST, PUT, PATCH, and DELETE?  You’re going to need to start passing parameters.  Let’s see an example, also using the Calendar API, where we create an event in our calendar.

To add a calendar event we need to use a POST method; fortunately, the OAuth2 class provides methods for all these calls.

response = access_token.post('https://www.googleapis.com/calendar/v3/calendars/{your-calendar-id}/events') do |request|
  request.headers['Content-Type'] = 'application/json'
  request.body = '{"end": {"dateTime": "2013-07-21T10:30:00.0z"},"start": {"dateTime": "2013-07-20T10:30:00.0z"}}'
end

# Check response.status; the call worked if it's 200
json = JSON.parse(response.body)

# Access a specific response value
puts json['id']

Note that we’ve removed the “.parsed” method from the access_token call and set some parameters on the request variable of the underlying Faraday HTTP client object.  Setting “Content-Type” allows us to pass the parameters as JSON, as well as receive the response in that format.  Again, playing around in the OAuth2 Playground (see above) can really help speed up getting your JSON data formatting squared away.
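If hand-writing the JSON string gets fiddly, you can build the request body from Ruby objects instead.  This offline sketch (the dates are the same made-up ones as in the POST above) produces the start/end dateTime shape the events endpoint expects:

```ruby
require 'json'
require 'time'  # adds Time#iso8601

start_time = Time.utc(2013, 7, 20, 10, 30)
end_time   = Time.utc(2013, 7, 21, 10, 30)

# Serialize to the {"start": {...}, "end": {...}} shape used in the POST
# above, with RFC 3339 timestamps ("2013-07-20T10:30:00Z").
event = {
  'start' => { 'dateTime' => start_time.iso8601 },
  'end'   => { 'dateTime' => end_time.iso8601 }
}
request_body = JSON.generate(event)
puts request_body
```

Building the hash and calling JSON.generate avoids quoting mistakes, and Time#iso8601 gets the timestamp format right for you (uppercase “Z” for UTC, which is valid RFC 3339).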

Google Calendar API – Update

Google updated their calendar API (from version 2 to version 3), and how much simpler it all is.  No more “gdata” objects, the whole thing has been moved to a RESTful model with the data being returned as JSON. Accessing the calendar data couldn’t be simpler using AJAX from a web page.

var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function() {
  if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
    var obj = JSON.parse(xmlhttp.responseText);
    var elem = obj.items;
    elem.forEach(function(entry) {
      console.log(entry.summary);
    });
  }
};
xmlhttp.open("GET", "https://www.googleapis.com/calendar/v3/calendars/qqqqqqqqqqqqqqqqqqqqqqqqqq%40group.calendar.google.com/events?singleEvents=true&key=XXXXXX-YYYYYYYYYYYYYYY_ZZZZZZZZZZZZZZZ", true);
xmlhttp.send();

where ‘qqqqqqqqqqqqqqqqqq%40group.calendar.google.com’ is your calendar ID, which can be found in the web interface to your calendar, under “settings”.  The “key” is the authentication key, which you’ll need to set up through the API console, accessible here: https://code.google.com/apis/console.  You can also configure the key so that its use is restricted by referrer, so that only specific sites can make use of it.

Version Number Madness

I don’t know who started it.  It probably wasn’t Microsoft, but Windows 95 is definitely a bit-player in the confusing story of how software companies mismanage their version numbers.

I used to work for a software company at the end of the eighties.  Early in our product’s development, a methodology for handling the version number was conceived.  It was pretty straightforward, which meant that a version number of 2.4b referred to Version 2 of the product, with minor updates numbering 4, and it was on its third patch.  Customers with support contracts got free upgrades as long as the major version didn’t change.

Microsoft Windows initially adopted a pretty sane version numbering scheme.  Everything was fine up to Windows 3.11, then suddenly we were at Windows 95, followed by Windows 98, a bewildering Windows 98 SE (Second Edition), and Windows Millennium Edition (designed to conflict with Windows 2000, its NT cousin?).  What a mess!  What was so great about 1995?

But under the hood, the major version numbers were still ticking over.  Windows 95/98/Me = version 4, Windows XP = version 5, Windows Vista = version 6, and then back to numbers again with Windows 7, with the list soon to be supplemented by Windows 8.  But wait!  Under the covers, Windows 7 is actually Windows version 6.1.  That makes no sense.  I mean it really doesn’t.  Apparently the reason for this is to allow software that checks for compatibility to run correctly; specifically, software written to run in Vista will run in Windows 7.  This is stupid.  Windows 8 will be version 6.2!

So, Microsoft has decided to follow Sun’s equally nonsensical version naming for Solaris.  Solaris’ version numbering was never going to make much sense.  Solaris 1 was really SunOS 4.1; not really unusual to re-number with a re-brand.  Solaris 2 was SunOS 5.0, and things were OK up to 2.6, which was SunOS 5.6, then it went all wonky.  SunOS 5.7 became Solaris 7.  Sure, a 64-bit OS is a big deal and deserves a major version change, but why not update the SunOS version at the same time?

Don’t even get me started on Pentiums, OS X, or Adobe CS!

In praise of simplicity

In the early nineties, I knew and worked with several Apple Mac users.  They would enthuse about how easy and uncomplicated Macs were.  At the time I was building convention exhibits which depended on esoteric input and output devices, the kind of hardware and interfaces that really didn’t mesh well with Macs.  I railed against the “closed” nature of Macs: that you needed special tools to get inside the thing, that they used non-standard connectors, and that annoying jack-in-the-box ResEdit program that was essential to get the machine to do anything non-standard.  But being a student of good interface design, I recognized that there was a useful logic to dictating the “correct” format of dialog boxes.  It meant that a user’s experience would be consistent across different packages.  This was why non-computer-science users liked these machines; there was a consistency about their interaction.  It made sense.  Contrast that with the legendary MS-DOS message “Abort, Retry, Ignore?”  Really?  Did anyone ever get something useful by selecting “Ignore”?  Did anyone ever really code to handle an “Ignore”?

There was a design rule that Mac dialog boxes should have no more than two options.  Simple, right?  There were exceptions.  Kai Software had a product called Bryce.  Everyone raved about how great the interface was, how innovative, how exciting.  It wasn’t.  It had a hidden menu system; options only revealed themselves when you moused over a part of the screen.  Beautiful?  Maybe.  But ridiculously hard to use.  Some users seemed to revel in its obscure nature, but it failed on a usability level.  Using it felt like you were playing Myst, randomly moving the mouse around to see if it changed the icon, or revealed an option.  Fun in a game, utter crap in a software interface.

You see, people like simple stuff.  The number of people who couldn’t program their VCR (video cassette recorder, for you younger readers) is legendary, the flashing “12:00” was always a dead giveaway at your grandparent’s house.  Someone came up with a great idea in TV guides, a number next to the show you wanted to record, which you entered on your remote, and Bingo! the show was programmed.  This was even improved with a bar code, and the remote updated to include a scanner, eliminating the embarrassment of entering the code for Care Bears, and ending up with Showtime After Midnight programming.  Simple is good, it encourages people to use products more, and more use means more money.

Google’s search, when it was first released on the world, was simple.  Just a text box.  People liked it.  Critics praised its simplicity.  Users loved its simplicity.  The page loaded in a flash, you typed in your search, and chances were the thing you were looking for was on the first page of returned results.  All that “complex” search language that you’d learned in AltaVista was unnecessary.  Google just worked.

So why is it all now messed up?  There are slabs of Javascript all over the main page (about 22KB).  You can actually start typing before it’s fully loaded.  As soon as you start typing, the page re-configures itself and the text box moves.  As you move the mouse away from the text, you roll over another “hot” area and half the page explodes with content giving you a preview, only to vanish immediately as you mouse out.  “What the hell was that?  Did you see that?  Did I just have a pop-up?”  You can see the less technology-savvy grandparents backing away from the keyboard already.  Oh, I’m sure it’s clever, I’m sure there’s a reason, but listen: you just made it more complicated.  You have alienated an albeit small group of users, and you made something that didn’t need to be any more than a text box more complicated.

In the ’90s, AltaVista got a “Webby” award for placing an additional search box at the bottom of the results page.  Something so simple and obvious was truly appreciated as useful.  Google just took their text box at the bottom of the screen away.  Why?  How could something that was so well received when it was implemented that it got an award be deemed no longer necessary?  Was it to fit in with Google’s new look?  Was it a loss of functionality to fit a visual design shift?  Was it elevating form over function?

The advantages of simple interfaces have been demonstrated time and time again.  It might even be accurate to say that Apple’s current status is a direct result of this line of design, although I’d be happy if someone could explain iTunes’ chaotic user interface to me!  Simple is good.  Simple works.  Simple is efficient.  Ask anyone who’s tried to pay their AT&T wireless phone bill over a weak wi-fi connection in Mexico how much they enjoyed waiting for that Adobe Flash intro to download and play on the opening page, and I think you’ll begin to get the picture.

You see, everyone’s getting in on the game.  Facebook just announced “Timeline”, a new interface to your page, and what’s this?  A customizable header?  Part of the popularity of Facebook was its simple, clean interface.  Everyone’s page looked pretty much the same.  Now we have what looks like the start of an attempt to make Facebook as customizable as, say, MySpace, and that’s a great-looking site, right?  I don’t know what the equivalent of the flashing “12:00” on a VCR is for a website, but I do know that less is more when it comes to interface design, and if you were famous for having less and you start adding more, you’re probably making a mistake.

Apple Memories

The passing of Steve Jobs reminds me of my first serious project, and by serious I mean that it was complicated and I got paid!

Back in the early eighties I was talking to the design department of a light engineering company that made precision gears and gearboxes. I was just out of high school and pretty cocky.  They showed me their method for visualizing gear tooth and gear tooth space, which was a collection of plastic templates and shapes.  You used these shapes and a ball-point pen to scribble, in the same way that you would use a Spirograph, round and round, rocking the shape around the curved template until the clean space in the middle showed you the shape of the gear teeth.

They explained to me that a gear tooth is composed of about 8 curved surfaces, some of them simple, some of them complex (involute).  I was pretty good at maths and geometry, and had been setting myself programming problems on various micros such as the TRS-80, Video Genie, PET, and Sinclairs.  The design department had recently bought an Apple II, and in my arrogant enthusiasm I said that if they could describe the geometry, I could write something that would draw these shapes and allow them to be printed out.  The only other option at that time was to use a very expensive early CAD system made by Racal.

It turns out that although there are many curved surfaces on a simple gear, there are only about 4 parameters used to cut them: the gear inner and outer radius, the number of teeth, and an adjustment to control the under- or over-cutting of the tooth space.  Of course, there are many complex gear shapes, but for this project we were only considering a simple gear.  No worms, or helicals, or anything exotic.  Armed with an A0-size piece of paper and a drafting table, I set about mapping out the various curves and how they all connected when the cutting parameters were defined.  This took much longer than I anticipated, but eventually I convinced myself that I’d got it down.

I started coding it all in Applesoft Basic, which at the time had some advantages over its Microsoft core, including some advanced trig functions and floating point arithmetic.  After a while I was generating screen images of gear teeth; the problems started when I came to figure out how to print the screen.  I remember acquiring some code from a magazine to do screen dumps, but I was already using too much memory and had reached the limits of the massive 96KB available.  The solution was to swap a large chunk of the program into the video memory whenever I initiated the print, and then swap it back out once printing was complete, which meant that the screen had to go blank once printing started.

It all worked.  It was slow, taking about 5 or 6 minutes to plot, and it was difficult to validate the output, since the only other methods were to use the Spirograph or actually cut the gear.  They used it for a few years until they finally invested in a CAD/CAM system, which could also integrate with the computer-controlled lathes on the shop floor.

Drinking from the font of the Google feeds

A week or so ago, I came across a Google “code playground” website (http://code.google.com/apis/ajax/playground/#retrieve_events), which allowed you to interactively “play” with AJAX code that accessed the Google APIs.  There were all kinds of things in there.  You could access maps, blogs, calendars, images, translations services, and so on, and so forth.  It was a lot of fun, and provided a really nice way to quickly investigate the available services and features.

I’m also helping out a musician with a website to promote his services as a soundtrack composer, and one of the things I wanted to have on his site was a calendar showing the details of his upcoming performances.  Now, this is work done as a favor, and most of the site is going to be pretty static, so I didn’t really want to go to the trouble of making it PHP or Ruby on Rails, or to use a web content management system like WordPress.  But I did want him to be able to update the calendar, and here was a nice, simple method: he could use a public Google calendar, which in turn would be accessed by some Javascript (using AJAX) to display a nicely formatted list of performance events.

Deconstructing the sample code, there were three main areas of interest.

  1. A URL which provided access to the public calendar
  2. Interrogation of the resultset, and formatting for display
  3. Some setting of parameters for the query to the calendar service

For the purposes of testing, I created a dummy public calendar with four entries.  The sample code listed the URL as:

var feedUri = 'http://www.google.com/calendar/feeds/developer-calendar%40google.com/public/full';

So I thought it would just be a matter of swapping in my Google ID for the string developer-calendar%40google.com.  Nope; that gave me an error indicating that the URL was wrong.  I poked around in the calendar settings for my Google account, and found that I could get the calendar address as XML, ICAL, or HTML.  In this case the calendar ID was the easy-to-remember(!) qjufu69ca69kgimjd6s7ujjjdg@group.calendar.google.com.  So I swapped that in, and bingo!  No events were displayed!  I took another look and noticed that the parameters in this particular example used a date range for the query.  I altered the end date to some time in the far future, and now I was displaying my dummy calendar events.
The sample code didn’t do anything more than display the “title” of the calendar event.  My application was going to need more than that.  I wanted to get everything from the date & time to the location and notes.  For this I was going to need to look at the javadocs, or more accurately jsdocs.  There was a link to the API Developer’s Guide from the Code Playground page, but all the examples seemed to deal with adding parameters to a GET statement, and when I tried doing this, by appending the parameters to the URL, it just caused errors.  What I wanted was the list of methods associated with the class CalendarEventEntry, instances of which were being returned in an array by the calendar query.

I can’t tell you exactly how I found it (probably by doing a Google search), but I found it in the end at http://code.google.com/apis/gdata/jsdoc/2.2/index.html.  Selecting the “google.gdata.calendar” package, and then the “google.gdata.calendar.CalendarEventEntry” class, revealed the list of available methods.  Now, my calendar entries had data in the “Description” field, but there was no getDescription() method, nor one inherited from any of its superclasses.  Looking up and down the list, there was a getContent() method inherited from the root atom.entry class.  I plugged that in, added the additional method getText(), since the call returned an atom.Text object, and out popped the descriptions from my calendar entries.  Great.

Now I wanted to get the date & time for each event.  Going back to the jsdocs, and realizing that I was looking for something associated with start and end times, I found the method getTimes(), which looked promising.  Unfortunately, I went round and round in circles trying to get the start time from the gdata.When object, but to no avail.  Finally, I found an example by searching, which revealed that I should be interrogating the zeroth element of the array, like this: event.getTimes()[0].getStartTime().  Why?  I don’t know, but it worked, and now I was able to build a pretty good display string showing the date, title, description, and start time:

html += event.getTimes()[0].getStartTime().getDate().toLocaleDateString() + " - " + event.getTitle().getText() + ' - ' + event.getContent().getText() + ' - ' + event.getTimes()[0].getStartTime().getDate().toLocaleTimeString();

I now noticed that the events weren’t being returned in chronological order.  I didn’t want to have to sort them myself; there must be a way of sorting them in the query.  Going back to the jsdocs for the CalendarEventQuery object, I saw that there were methods for setOrderBy and setSortOrder, but my inexperience, or lack of frequent practice using javadocs/jsdocs, meant that I couldn’t see the one thing that was staring me in the face.  Time to phone a friend.  My friend explained that the section titled “Field Summary” held the equivalent of enum or constant declarations, and that this was where the secret to supplying the correct parameters to the setOrderBy and setSortOrder methods lay.  These static values were ORDERBY_START_TIME and SORTORDER_ASCENDING, so by plugging these in, with the proper syntax, I had:

query.setMinimumStartTime(startMin);
query.setMaximumStartTime(startMax);
query.setOrderBy(google.gdata.calendar.CalendarEventQuery.ORDERBY_START_TIME);
query.setSortOrder(google.gdata.calendar.CalendarEventQuery.SORTORDER_ASCENDING);

This set the date range, and the sort order.

So now I had everything in place to implement the AJAX Google calendar on a fairly static website.