Common problems with Google+ Sign-In on Android

It has been fantastic to see so many people trying out Google+ Sign-In, and through the bootcamps and other events I've had a chance to talk to some people who are actually implementing it in their apps. The Android integration is pretty straightforward, thanks to Google Play Services, but there are still some issues I've seen come up a couple of times.

tl;dr: make sure your app is set up in the API console, and make sure you can handle multiple onStart events coming in during sign-in.

1. Consent screen appears more than once. 

The basic life cycle of the PlusClient looks a bit like this:
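
A minimal sketch of that lifecycle, assuming an activity that implements the ConnectionCallbacks and OnConnectionFailedListener interfaces (the REQUEST_CODE_RESOLVE_ERR constant and the onClick wiring here are illustrative, not the exact code from the gist):

private PlusClient mPlusClient;
private ConnectionResult mConnectionResult;
private boolean mResolveOnFail = false; // only resolve errors after the user taps sign-in
private static final int REQUEST_CODE_RESOLVE_ERR = 9000; // arbitrary request code

@Override
protected void onStart() {
    super.onStart();
    mPlusClient.connect(); // succeeds silently if the user is already signed in
}

@Override
protected void onStop() {
    super.onStop();
    mPlusClient.disconnect();
}

@Override
public void onConnectionFailed(ConnectionResult result) {
    mConnectionResult = result; // stash the error for later
    if (mResolveOnFail && result.hasResolution()) {
        try {
            result.startResolutionForResult(this, REQUEST_CODE_RESOLVE_ERR);
        } catch (IntentSender.SendIntentException e) {
            mPlusClient.connect(); // the resolution intent died, so try again
        }
    }
}

// Wired to the sign-in button.
public void onClick(View view) {
    mResolveOnFail = true;
    if (mConnectionResult != null) {
        onConnectionFailed(mConnectionResult); // kick off the stored resolution
    } else {
        mPlusClient.connect();
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_CODE_RESOLVE_ERR && resultCode == RESULT_OK) {
        mPlusClient.connect(); // keep resolving errors until we connect
    }
}

@Override
public void onConnected() {
    mResolveOnFail = false; // back to the quiet state, e.g. for a later sign-out
}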


When onStart is called, we immediately call mPlusClient.connect(). Google Play Services checks whether the user is already signed in to the application, and, if so, the onConnected method is called straight away and the user is seamlessly signed in.

If the connect() call fails, then onConnectionFailed() is called with a ConnectionResult object, which represents the error. The error is probably RESOLUTION_REQUIRED, which unsurprisingly means it also has a resolution - generally an intent that can be started. The first one likely to be seen is the account chooser, if an account needs to be selected.

Starting the resolution for that result will display the chooser activity, and when it completes our onActivityResult() method will be called. The onActivityResult() can then call connect() on the PlusClient again. If all the errors have been resolved, the onConnected() method is called, but we'll likely get another error, requiring the consent dialogue to be displayed. Once this has been accepted, and a token retrieved, onActivityResult() is called again, we connect() once more, and the onConnected() call is reached.

Two things throw flies into this ointment:

a) We generally don't want to resolve on the onConnectionFailed() error until the user presses a button
b) While the resolution is happening onStart() may be called multiple times on our activity

The first means we need to keep a bit of state in onConnectionFailed(). If the user hasn't pressed the button, we shouldn't start the error resolution; instead we kick that off in response to the onClick(). Once they have started, though, we should resolve every error we reach.

The second complicates this: if we just resolve whenever the result has a resolution, then when onStart gets called again we'll get another connect(), and kick off the resolution a second time, displaying two dialogs!

The solution is just to put a flag around the functionality based on whether we're in the middle of a resolution - you can see that as the mResolveOnFail in the code above. This defaults to off, so that when the activity starts and calls the mPlusClient.connect() it doesn't immediately display the account chooser or consent dialogue to the user. We turn it on in response to the sign-in button being pressed.

We also flip it off again when we get the onConnected() callback, so that if the user signs out we're in the right state.


2. Sign-in succeeds, but we can't retrieve any profile information.

One of the often overlooked steps when setting up Google+ Sign-In on Android is that the project must be associated with a client ID from the API console. On the web and in iOS the developer has to specify this client ID and the application will show an error if it is not given, but on Android it is inferred automatically from the combination of the Android package name, and the SHA-1 fingerprint of the signing key.

The API console is where the developer enables various services, and where they can manage the quota they have assigned. If an application runs without a matching API console project, it is effectively assigned 0 quota. This means that an unlinked application can go through the sign-in flow and retrieve an OAuth 2.0 access token for the user, but not actually be able to make any calls. So, for example, calling a lot of the PlusClient methods will return unexciting nulls instead of exciting profile data.

Of course, it doesn't have to be that someone hasn't created a client ID! It can equally happen with a typo in the package name, or (as in one case I saw earlier this week) the system having more than one Android keystore on it - so the fingerprint was from one, but the APK was signed with another. Either way, setting up the correct details in the console will resolve it.


If you're not sure whether this is the problem that you're having, there is a relatively easy way to check. Just sign in as normal, and in your onConnected() callback retrieve the OAuth 2.0 access token. You need to do this off the main thread, else it can cause deadlocks:
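
A minimal sketch of retrieving the token with an AsyncTask (the activity name and logging here are illustrative, and the scope assumes plus.login):

new AsyncTask<Void, Void, String>() {
    @Override
    protected String doInBackground(Void... params) {
        try {
            // Blocking call, so it must stay off the UI thread.
            return GoogleAuthUtil.getToken(
                    MyActivity.this,
                    mPlusClient.getAccountName(),
                    "oauth2:https://www.googleapis.com/auth/plus.login");
        } catch (UserRecoverableAuthException e) {
            // The user may need to grant access - e.getIntent() can be started to resolve.
            return null;
        } catch (IOException e) {
            return null;
        } catch (GoogleAuthException e) {
            return null;
        }
    }

    @Override
    protected void onPostExecute(String token) {
        Log.d("AccessToken", "token: " + token); // paste this into the tokeninfo URL below
    }
}.execute();
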
You can then enter the access token into the tokeninfo endpoint:
https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=YOUR_TOKEN_HERE
This should display a client ID that matches your project. If it displays something else, it's probably not set up right. If the issued_to is 608941808256-43vtfndets79kf5hac8ieujto8837660.apps.googleusercontent.com, and the response looks a bit like this:

{
"issued_to": "608941808256-43vtfndets79kf5hac8ieujto8837660.apps.googleusercontent.com",
"audience": "608941808256-43vtfndets79kf5hac8ieujto8837660.apps.googleusercontent.com",
"user_id": "104824858261236811362",
"scope": "https://www.googleapis.com/auth/plus.login",
"expires_in": 1353,
"access_type": "online" 
}

Then the app hasn't matched a project at all,  so you'll need to configure it in the API console. If you see the error INVALID_KEY in the logs, it may also indicate that the API console project is not properly configured - though I must admit I haven't yet quite grokked under what circumstances that one does occur.

There's a full example activity with a plethora of comments in this Android Google+ Sign-In gist, which hopefully will be useful as a reference to one way of implementing sign in.

UPDATE - one additional bug that surfaced quite a lot around the release of the latest version of Google Play Services was the onConnectionFailed method being called with a ConnectionResult which does not have a resolution, and has an error code of ConnectionResult.SERVICE_VERSION_UPDATE_REQUIRED.

As you might guess from the name, this indicates that the version of Google Play Services on the device is too low. Normally new versions will be updated automatically, but there is always a time delay in the roll out, so it is quite possible to get this error as updates are released.

You can handle this in the onConnectionFailed by calling getErrorDialog on GooglePlayServicesUtil, but the best way is to actually check whether Google Play Services is installed and up to date before even trying to connect. You can see a snippet of how to do this in the documentation.
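
A short sketch of that pre-connect check (the request code value is arbitrary):

int available = GooglePlayServicesUtil.isGooglePlayServicesAvailable(this);
if (available != ConnectionResult.SUCCESS) {
    // Prompts the user to install or update Google Play Services rather than failing later.
    GooglePlayServicesUtil.getErrorDialog(available, this, 9000).show();
} else {
    mPlusClient.connect();
}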

Postmessage & OAuth 2.0


As part of the release of Google+ Sign-In, some people have noticed that signing in via the Sign In button doesn't redirect them to Google, then back to the site, as would have happened if they'd been using the basic OAuth 2.0 flows.

One of the backbones of Javascript security is the same-origin policy, which prevents running code from seeing things from origins other than its own. For example, HappyImageWebsite.com can't go and read what's happening in another window showing SecureBank.com. Sometimes, though, it is helpful to be able to communicate between windows, or between a window and an iframe, that are from different origins. This is tricky, and has been the source of many interesting workarounds over the years.

The HTML5 web messaging specification standardised a solution to this problem, in the form of the window.postMessage() method. As you might guess, this lets you send a message from one window to another in a pretty straightforward way, even if they're from different origins. E.g.:




var cw = document.getElementById("myiframe").contentWindow;
cw.postMessage("Hello World", "http://examplea.com");



The second argument is the target origin - if the loaded window is actually from a different place, then the message will not be sent (so you don't accidentally send messages to a rogue window). Within the receiving window, we can listen for the message by adding an event listener.

window.addEventListener("message",
    function(e) {
        if (e.origin == "http://exampleb.com") {
            alert(e.data);
        }
    },
    false);


Here we can check the source origin to make sure it came from where we expected. This handy little API is supported basically everywhere, including IE8+.

So what does this mean for sign-in? In the normal OAuth 2.0 (client-side) flow:

1. The application generates a URL, and redirects the user to the provider
2. The provider shows them a consent screen or similar to approve the application
3. Once the user has submitted the consent form, they are redirected back to the site with an access token that can be used to access resources

The server side flow is broadly similar, except a shorter-lived code is sent which the server can exchange for an access token. Using postMessage, we can make this experience a bit easier.

1. The sign in button can create a hidden iframe as a post message relay
2. This can pop up a window as needed for consent and sign in, with a redirect-uri of "postmessage" rather than another site
3. Once the user has submitted the consent form, the window sends back an access token via postMessage to the consuming code

This saves round trips for the user, and potentially makes the experience faster and smoother. Both sides can check the origin is what they expect (so registering the origin in the provider's console is still required). As it never puts an access token in the URL, it's likely more secure as well.

This technique also allows "immediate mode" checking. In this case, when the button is created (or the Javascript initialised), a hidden iframe can be created pointing to the authentication URL, and this can go away and check whether the app is authorised. This is exactly what the Google+ Sign-In button does, meaning the site just has to set up a callback, and previously authorised users smoothly sign in.
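
For example, a sketch of rendering the button with a callback (the client ID is obviously a placeholder): if the user has already authorised the app, the callback fires with an access token shortly after page load; otherwise it reports immediate_failed until they click the button.

function signinCallback(authResult) {
    if (authResult["access_token"]) {
        // Signed in, possibly silently via the hidden iframe check.
    } else if (authResult["error"] == "immediate_failed") {
        // Not previously authorised - wait for the user to click the button.
    }
}

gapi.signin.render("signInButton", {
    "clientid": "YOUR_CLIENT_ID.apps.googleusercontent.com",
    "cookiepolicy": "single_host_origin",
    "scope": "https://www.googleapis.com/auth/plus.login",
    "callback": signinCallback
});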

It also allows some (potentially) useful functionality such as checking the session state without necessarily having to round-trip to a server.

So, wins! What's the downside?

1. You need Javascript, and that means adding another file to your page (if you don't already have the plusone button or other Google+ widgets). There is an asynchronous loading snippet that is definitely worth using to make sure it doesn't block your page rendering, but this can still be a concern.
2. Using this flow you can still request offline access, which will return an access token for use in the client and a short-lived code for the server*. However, as you're not generating the auth URL, you need to ensure you send a one-time value out to the client and have it returned along with the code, then check that it matches. This is basically what you would have used as the 'state' parameter in the old flow.

If you want to see how this looks in practice, check out https://code.google.com/p/oauth2-postmessage-profile - this application actually uses the Shindig project's gadgets.rpc for the inter-frame communication, but that uses postMessage underneath. This kind of functionality is also explicitly called out in the OpenID Connect spec, specifically the bit on session management.

* The server can exchange this for both an access token and a longer lived refresh token, which it can exchange for access tokens as needed.

Google Webservice Simple API Access in Objective-C

This is as much an aide-mémoire for myself as anything else, but in case you're using the Objective-C Google API client, you may at some point need to use the simple API access method. This is where you pass an API key to identify the call as being under a certain project, rather than using a full OAuth2 authentication flow. This doesn't allow access to personal data, but does mean that any calls will be tracked under the quota for your project, rather than a (lower) IP-based quota.

It turns out to be very simple. First, go to the API console and the API Access option on the left menu. At the bottom there will be a simple API access section where you can generate new keys. The "API key" value here is the one you need.


Don't worry, I regenerated the key after taking the screenshot! We can then pass the key in as a parameter to the service we create using the APIKey property. In this case I'm using the GTLServicePlus which queries Google+, and using the activities search to retrieve posts that contain the string "objective-c". The same process will work on any GTLService you're using though, as long as it is a public data API and doesn't require OAuth2 authorisation.



It's also possible to pass the API key as an additional parameter to the GTLQuery, but this is definitely the easiest method!





Programmatically Scheduling Hangouts

One pretty common request around Google+ hangouts is the ability to programmatically schedule them, and have a URL which everyone can join in the future. This is useful for being able to send out links beforehand, and make sure people are ready to go.

While there isn't a specific API for this at the moment, there is actually a workaround that makes use of the Hangout integration into Google Calendars.

In the Calendar you want to use to create hangout entries, go to the Settings page under the cog icon on the top right, and enable automatic hangout creation:


This means that any event created on that calendar will automatically have a hangout URL generated, and that can be retrieved via the API. So, to programmatically create a URL for a future hangout, create an event using the calendar insert API. This example is using the PHP client library.



$event = new Google_Event();
$event->setSummary('Future Hangout');
$event->setLocation('The Internet!');



First we set the summary and a location - the general fields we might want in the calendar entry. Of course, if we're not actually going to show this calendar entry to anyone it might not matter what these are set to!



$start = new Google_EventDateTime();
$start->setDateTime('2012-12-19T20:00:00.000-07:00');
$event->setStart($start);
$end = new Google_EventDateTime();
$end->setDateTime('2012-12-19T20:25:00.000-07:00');
$event->setEnd($end);




Next we set the start and end times of the event - you can use the hangout link right away so this isn't necessarily important, but it probably doesn't hurt to set it to the real times of the hangout. 
$attendee1 = new Google_EventAttendee();
$attendee1->setEmail('ianbarber@example.com');
$attendees = array($attendee1);
$event->attendees = $attendees;
Finally, if we want to invite attendees we can, so it will appear in their calendar. 
$createdEvent = $service->events->insert('primary', $event);


When we insert the event we choose which calendar to add it to - in this case we're using primary, but if you didn't want to clutter up your main calendar you could create a special 'hangouts' calendar or similar. 'primary' is a special ID referring to the user's main calendar; for any other calendar you'll need to get its ID via the calendarList.list API call.

As long as we have checked the "automatically add Google+ hangouts" button in the settings, the response we get back will have an entry hangoutLink:

["hangoutLink"]=>  string(98) "https://plus.google.com/hangouts/_/calendar/aWFuLmJhcmJlckBnbWFpbC5jb20.k26371tdft6a7qq6bpt4m5hrso"

We can extract this and email it out to our attendees or embed it anywhere else, even if we never expose the calendar invite to them. The full sample listing is in a gist.

ZeroMQ Pattern: Pub/Sub Data Access

One problem that comes up with some regularity is controlling access to a stream of rapidly changing data, particularly over multicast. For example, there may be a stream of updates which is being broadcast out to many users (possibly a tree of users with repeaters at certain points), but we would like to control which ones can see those updates independently of that data transmission. 

There are a tonne of ways of doing this, but one of my favourites is to take a note from the book of everyone's favourite copy protection technology on DVDs and Blu-Rays. Rather than trying to restrict each user's access to the data, we encrypt and freely share the data, but share the decryption key only with our approved readers.

The publisher holds a list of keys which are shared between it and individual consumers. It generates a data encryption key which will be used to symmetrically encrypt the messages as they are sent. The publisher encrypts this key under each of the consumer keys, and sends out one bundle which everyone receives. Each consumer can pull out their own code, and decrypt it with their consumer key to get the data key. 

If the publisher needs to revoke access, it simply generates a new code and sends it out to all the users except the one who can no longer receive the data. This is particularly convenient for PGM transports, as it means that the publisher really can push data out without worrying too much about who is in the group, with the access management being done in a side channel.

For the film industry this meant that each disc was effectively a message, and came with its content key encrypted under the keys for each player (or set of players) that needed to play the disc. In their case the amount of information leaked by one key getting out is pretty major, so it didn't work all that well. However, if our messages are smaller, and we're more concerned with preventing access to future data once we have revoked access rather than preventing access to past data, it's a good fit. 

As a pure example, let's look at how you might implement such a thing in the PHP ZeroMQ binding. The code in all its noddy glory is on github.

First, let's take a look at the client, the consumer of the data.


$code = openssl_random_pseudo_bytes(8);
$decode_key = null;
$myName = uniqid();


We're going to start by setting up some variables. We generate a random code for ourselves as the encryption key, and a random string for the name using uniqid. Next we actually need to do some work. For this example we're just going to grab a bunch of data, and then exit.
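
The snippets below assume a couple of sockets have already been created - something along these lines (the socket types and addresses are my assumption here, not necessarily what the github code uses):

$context = new ZMQContext();

// SUB socket for the encrypted data and config broadcasts.
$sub = $context->getSocket(ZMQ::SOCKET_SUB);
$sub->connect("tcp://127.0.0.1:5566");
$sub->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "vital.");

// Control socket for the ADD/RM registration messages.
$ctl = $context->getSocket(ZMQ::SOCKET_DEALER);
$ctl->connect("tcp://127.0.0.1:5567");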


// Insecure key exchange! Fnord!


$ctl->sendMulti(array("ADD", $myName, $code));

for ($i = 0; $i < 10000000; $i++) {
    $data = $sub->recvMulti();
    if($data[0] == 'vital.data' && $decode_key != null) {
        echo $data[1], " ", plaintext($decode_key, $data[2]), "\n";
    } else if($data[0] == "vital.config") {
        $keys = json_decode($data[2], true);
        $decode_key = plaintext($code, $keys[$myName]);
        echo "Code update: ", $data[1], " ", bin2hex($decode_key), "\n";
    }
}

$ctl->sendMulti(array("RM", $myName));


First thing we do is register our key with the server. In this case we're just passing the key straight across the wire, which is not great if there are middlemen snooping on us - we could use a Diffie-Hellman key exchange, or perhaps have some preshared keys or async crypto to secure this, especially since often you're implementing this because data is going across some untrusted network (such as cloud hosting). In practice, I've found that actually pre-arranging the key list is generally fine (and perhaps pushing it out through a config management tool), as consumers don't get modified very often, but for the example it's easier to just fire it across another socket.

Once we've enrolled with our ADD command, we then listen on our SUB socket for messages. If the message is vital.config we need to extract our data key and decrypt it. The data in this case is sent via JSON (which isn't really the best choice here, but was me being a bit lazy!) so we JSON decode it to get a hashmap of client identities to encrypted data keys. We look up our entry, and then decrypt the data key using the key we shared with the producer.

In the other case, we receive a data message. In that case we use the data key we received (as long as it has been set by then), decrypt the message, and print it out. The decrypt function is straightforward and looks like this: 


function plaintext($code, $data) {
    $data = base64_decode($data);
    $iv = substr($data, 0, IV_SIZE);
    $data = substr($data, IV_SIZE);
    return openssl_decrypt($data, CRYPTO_METHOD, $code, false, $iv);
}


Most of this is actually boilerplate because of the use of JSON in one case. We base64_decode our value, extract the initialisation vector (kind of like a salt) and the data, then use the openssl_decrypt function to decode the data. Simples.

On the producer side, it's not much harder. Here's our main loop:
 

while(true) {
    $poll->poll($read, $write, 0);
    if(count($read)) {
        // We have new control messages!
        $msg = $ctl->recvMulti();
        if($msg[1] == "ADD") {
            $client_codes[$msg[2]] = $msg[3];
        } else if($msg[1] == 'RM') {
            unset($client_codes[$msg[2]]);
        }
        $code = openssl_random_pseudo_bytes(8);
        $data = get_codes($client_codes, $code);
        $pub->sendMulti(array("vital.config", $code_sequence++, $data));
        echo "Code update: ", $code_sequence, " ", bin2hex($code), "\n";
    } else {
        $data = secret($code, vital_data());
        $pub->sendMulti(array("vital.data", $sequence++, $data));
    }
    
    // Slow things down to give readable output
    usleep(10000);
}


In this case we are polling in case we receive control messages. If we get any in, we either add a client code to our list if it's an ADD, or remove it if it's an RM. Once we have updated our client list we then need to regenerate our secret data code (the $code variable) and encrypt that under each one of the consumer keys for the consumers we want to allow to see it.
Note: this doesn't take into account someone naughty adding their own code - in any realistic situation you'd want to verify that the sender is actually entitled to register the code they sent. Public key encryption could help here, or just skipping this step and having it triggered via a backchannel.
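
The get_codes call above isn't shown in these snippets; a sketch consistent with what the client decodes would be something like:

function get_codes($client_codes, $code) {
    $out = array();
    foreach ($client_codes as $name => $client_code) {
        // Encrypt the new data key under each consumer's own shared key.
        $out[$name] = secret($client_code, $code);
    }
    return json_encode($out);
}
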
If we don't have any control messages, we just grab the next bit of data to send (calling the vital_data function in this case) and encrypt it under our current data key. The encryption function is straightforward:



function secret($code, $data) {
    $iv = openssl_random_pseudo_bytes(IV_SIZE);
    return base64_encode($iv . openssl_encrypt($data, CRYPTO_METHOD, $code, false, $iv));
}


We just generate an IV, encrypt the data, concat the two and base64 encode the result (again because of the JSON encoding - there's no real need to do this if just sending across ZeroMQ). The output shows the process in action:
$ php producer.php
Code update: 0 77e95b9e4e660344
Code update: 1 86e9c30313c07ded
Code update: 2 a0330e14c78fa18b
Code update: 3 ad0ed74fff073532
The producer just echoes whenever it generates a new key (which is printed here). 
$ php client.php
Code update: 0 77e95b9e4e660344
4 My important data is this: 854153136
5 My important data is this: 1926327357
6 My important data is this: 1843786438
7 My important data is this: 1643275369
The client gets the data key, and can read the data. A bad client that isn't in the list can just see the data as it is on the wire:
$ php badclient.php
Code update: 0
4 eD7nCjH+uhm8727kucE722s1UHdjUjJqYVhIVkFpT...
5 u8xwhZpuYejplenQ3hFjP1paYnEvN25penJyTFQ2a...
A note on performance: There is a great deal that could be done to make this more efficient, particularly in the encoding! Still, on a two year old macbook air I could comfortably push 20k+ messages per second with this setup, so even with a naive implementation this is still not too much overhead for many uses. 

Google APIs Java Client Library from Clojure

At Devoxx the other week I spoke about Clojure, and as an example looked at how it could be used to access the Google+ public data API. Because the Google+ APIs are part of the general Google APIs Discovery Service we looked at how to process and generate functions to call this library automatically (more on which in later blog posts). However, for use now the easiest way to access any Google API via Clojure is probably via the Java client libraries using the Java interop in Clojure.

There's a complete example up on Github, and it is really easy to get started. I use Leiningen to handle dependencies (as everyone should!), and because this backs onto Maven it can just pick up the Google repo which contains builds of a variety of useful libs. The project.clj file looks like this:
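
Something along these lines (the exact artifact versions here are illustrative rather than the ones in the Github project):

(defproject plus-clojure "0.1.0-SNAPSHOT"
  :description "Querying the Google+ public data API from Clojure"
  :dependencies [[org.clojure/clojure "1.4.0"]
                 [com.google.api-client/google-api-client "1.12.0-beta"]
                 [com.google.http-client/google-http-client-jackson "1.12.0-beta"]
                 [com.google.apis/google-api-services-plus "v1-rev27-1.12.0-beta"]])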



In this case we're bringing in 3 jars from the Java API library: the (ridiculously fast) Jackson JSON parser, the base API client library and the Google+ API service. This last part is what you would replace (or add to) to query any one of the 50+ APIs available through this service.

To actually access the service we need to set up some variables. The HTTP transport is used for fetching and Jackson for the JSON decoding. In our case we're using Simple API Access rather than OAuth, so we just need to pass our API key from the API console to a new PlusRequestInitializer. If you've not seen the Clojure Java interop before it is very straightforward - a "." at the end of a class name means call the constructor of that class, and a "." at the start of a function means call this method on the object that is passed as the first argument.
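
A sketch of that setup (the namespace name and API key are placeholders):

(ns plus-clojure.core
  (:import [com.google.api.client.http.javanet NetHttpTransport]
           [com.google.api.client.json.jackson JacksonFactory]
           [com.google.api.services.plus Plus$Builder PlusRequestInitializer]))

(def api-key "YOUR_API_KEY") ;; simple API access key from the API console

(def transport (NetHttpTransport.))  ;; trailing "." calls the constructor
(def json-factory (JacksonFactory.))

(def plus-builder (Plus$Builder. transport json-factory nil))
(.setPlusRequestInitializer plus-builder (PlusRequestInitializer. api-key))
(def plus (.build plus-builder))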



You can see we call the setPlusRequestInitializer method on the plus-builder object we create on the previous line.

Now, we just need to perform our API call:
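
Roughly like this (the function name is mine):

(defn search-activities [query]
  (let [results (-> plus
                    (.activities)
                    (.search query)
                    (.execute))]
    (map (fn [activity]
           (list (.getTitle activity) (.getUrl activity)))
         (.getItems results))))

;; e.g. (search-activities "clojure")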



This function is very simple. We're using the threading macro just to make the chain of object calls a little more readable. First we call activities on the Plus object to retrieve the Activities object that represents available posts. We then prepare that by calling its search method and passing the supplied query string. Finally we call execute on the returned Search object, and assign the result object to "results".

We then use Clojure's map function which happily iterates over the result set returned by the getItems() method on the search result object. Each activity gets passed to the anonymous function defined there, which pulls out the title and URL into a straightforward list.

While it's not the most idiomatic Clojure in the world, it is very easy to wrap the client libraries in some more familiar code, and get going with the APIs very quickly!

Vanity Metrics in Social Media

I was reminded of one of my pet social peeves as part of a conversation with one of the astonishingly smart Google interns. One of the most challenging aspects of social networks is that, broadly, people within companies don't really know what "doing a good job" looks like. Depending on the organisation, social media can be part of marketing, PR, customer service, a specialised department, even IT. The aim of these departments is often mismatched with the potential or the audience the brand has across their social media, so it is difficult to create effective performance measures.

Most well managed teams create measurable goals - increase X number by Y percent and so on. However, when you chuck something as mutable as social systems into the balance it is difficult to avoid putting in numbers that qualify as vanity metrics, rather than useful data.

Vanity metrics is a term from the world of lean start-ups, and refers to numbers that people quote because it makes them feel good, rather than because they matter. An example might be numbers of links to a piece of software rather than number of customers, or the number of viewers of a page rather than number of signed up users. The idea is that there are certain key figures that reflect the business model underlying the startup, and measuring those is far more important (and often more difficult) than tracking numbers that are easier to measure (and make you feel better).

Something like circled/follower/fan count is a classic vanity metric - it will (in general) trend upwards with the growth of the network, and a lot of social teams are measured on their ability to increase this single number without any respect to what it actually represents. I've heard quite a few stories over the years of notable brands discovering that 50% of their "fans" don't know the brand has a page at all, or that their followers are not in the demographic that the company is spending millions of pounds targeting.

That said, there is significant challenge in actually creating metrics that matter. Much like television advertising, there is value in exposure to a brand or product, and that certainly happens across social media, but is extremely difficult to measure. There is significant value to good customer service, but in general if you respond well to someone they just go away! On a network like Google+ where the default method of communication is to a limited circle of people, there may be no way for a company to tell that a user is promoting a brand with which they've had a good experience.

It's difficult to talk about social media - I've been on the side of a brand representing itself (with Virgin), a user of a network (as myself!), and a network provider (with Google), and it hasn't made it much clearer how to avoid this problem of focusing on numbers that are easy to measure but are of limited value. I am completely convinced that the best approach to any social system is to treat it as a series of conversations between people and to be human and honest in any communication, but I have yet to find an ambient measure that can be easily extracted to let anyone know how they're really doing.

Classic marketing metrics like net promoter score are still an excellent way of measuring engagement with a brand, but overall I think it's really important that companies keep experimenting and evaluating, and looking for the metrics that track with what makes a difference to them.

TLS and ZeroMQ

It's pretty straightforward to use symmetric encryption over ZeroMQ - just a case of encrypting and decrypting at each end with some previously shared key. Asymmetric encryption is a bit more interesting, as it allows signing for message integrity and authenticity, as well as data hiding. There have been some good examples of crypto over Pub/Sub (notably Salt), but not a lot of examples of direct messaging.

The de-facto library for this sort of work is OpenSSL, but this has a couple of problems. The first is that usually OpenSSL manages the TCP connection itself, which could be an option for some ZeroMQ cases, but doesn't fit if the user wants to use a different transport, or an unusual topology. TLS or SSL also requires a handshake at the start of the communication, which means we may have to send messages back and forth without there being any application data.
 
For the first part, OpenSSL includes support for usage as a filter thanks to its BIO I/O abstraction layer. Memory BIOs allow storing the data that would be written to or read from a network, so that the sending and receiving can be handled elsewhere. Bert JW Regeer has previously blogged about using OpenSSL in an evented environment with this model, which I thought was a great base for use with ZeroMQ. Below, and in a Github repo, I've built an example of pushing encrypted messages between two applications using ZeroMQ and OpenSSL with memory BIOs.

As a quick note, for this example I generated a self-signed certificate to use for the communication:
openssl genrsa -des3 -out server.key 1024
openssl req -new -key server.key -out server.csr
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
The code consists of a client, a server, and a class that handles generic TLS over ZeroMQ. The client code runs in a loop as we will need to send and receive as part of the handshake process. We push application data to our TLSZMQ object, and check whether it needs to write data to the network - in our case as ZeroMQ message - or whether there is an application data to process. When we receive replies via ZeroMQ, we push that into the object. In this case we're just sending a 'hello world' message and printing the result.



The server code is slightly more complicated, as we have to initialise with our certificate details, and we want to be able to support multiple clients. As we are using a ROUTER socket, we can take the identity out of the message parts before the delimiter, and use the furthest back as the connection identifier. This means we're encrypting between client -> server, even if it's client (ssl) -> hop -> hop -> server (ssl). That said, I suspect a large number of uses of this kind of encryption will actually be going over an inner hop, with the rest unencrypted on a private network, e.g. client -> hop (ssl) -> hop (ssl) -> server.

Each identity gets a new TLSZMQ object, which is stored in a std::map keyed against the identity. Each message that comes in we push to the appropriate TLSZMQ object (creating one if we have a new connection), then check whether we can recv application data or whether the object needs to write to the network, exactly as with the client.



Finally, the meat of the work is in the TLSZMQ class. This class is a bit longer, so it's worth breaking it down a little. We start off with the constructors. We use two - one for clients, one for servers. The differences are which connection methods we use - SSLv3_client_method or SSLv3_server_method (we could also use TLSv1) - and then, importantly, we set the state. SSL_set_connect_state tells the library to reach out to a server to establish a connection, SSL_set_accept_state instructs it to expect an inbound connection. Of course, as we are using ZeroMQ we can connect or bind and start services in any order.



The constructor calls the init functions, which set up the OpenSSL library. It's split into two parts as we need to attach the certificates to the context in the server version - note that we should really be creating a context just once per program initialisation, but in this case I was a bit lazy! The first section just inits the general library and loads error strings, before creating a context with the passed-in method. The second section creates the BIO i/o abstractions, using the mem BIO type that allows us to use it as a filter. We use the SSL_set_bio function to instruct the library to use them.
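
A rough sketch of the shape of that class (the member and helper names here are my own, not necessarily those in the repo):

#include <openssl/ssl.h>
#include <zmq.hpp>

class TLSZMQ {
public:
    // Client: will initiate the handshake.
    TLSZMQ() {
        init_ctx_(SSLv3_client_method());
        init_ssl_();
        SSL_set_connect_state(ssl_);
    }

    // Server: needs the certificate and key, and waits for an inbound handshake.
    TLSZMQ(const char* cert_file, const char* key_file) {
        init_ctx_(SSLv3_server_method());
        SSL_CTX_use_certificate_chain_file(ctx_, cert_file);
        SSL_CTX_use_PrivateKey_file(ctx_, key_file, SSL_FILETYPE_PEM);
        init_ssl_();
        SSL_set_accept_state(ssl_);
    }

    void update(); // tick the state machine (see the sketch below)

private:
    void init_ctx_(const SSL_METHOD* method) {
        SSL_library_init();
        SSL_load_error_strings();
        ctx_ = SSL_CTX_new(method); // really this should happen once per process
    }

    void init_ssl_() {
        ssl_  = SSL_new(ctx_);
        rbio_ = BIO_new(BIO_s_mem()); // ciphertext arriving from ZeroMQ goes in here
        wbio_ = BIO_new(BIO_s_mem()); // ciphertext to be sent over ZeroMQ comes out here
        SSL_set_bio(ssl_, rbio_, wbio_);
    }

    void net_read_();  // SSL_read decrypted application data
    void net_write_(); // BIO_read ciphertext destined for ZeroMQ

    SSL_CTX* ctx_;
    SSL* ssl_;
    BIO* rbio_;
    BIO* wbio_;
    zmq::message_t zmq_to_ssl_; // network data waiting to go into the BIO
    zmq::message_t app_to_ssl_; // plaintext waiting to go into SSL_write
};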



The main update loop is ticked at various points by the client and server code. This addresses the communication with the SSL functionality via the BIOs. We have four variables we're using to push data in and out - from the app to the library, and from the library to ZeroMQ. In the update loop we check for network data (e.g. data from the other side of the SSL connection) and BIO_write it, which pushes it into memory for use. If there is data from the application to be encrypted and transmitted we push it in with SSL_write. Then we call the netread and netwrite functions which handle the other parts.
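
Continuing the sketch from above, the tick might look roughly like this (error handling trimmed right down):

void TLSZMQ::update() {
    // Ciphertext that arrived over ZeroMQ is handed to OpenSSL via the read BIO.
    if (zmq_to_ssl_.size() > 0) {
        BIO_write(rbio_, zmq_to_ssl_.data(), zmq_to_ssl_.size());
        zmq_to_ssl_.rebuild(0);
    }

    // Plaintext from the application is pushed into the SSL object. During the
    // handshake this returns <= 0 (SSL_ERROR_WANT_READ), so keep the data and
    // retry on a later tick.
    if (app_to_ssl_.size() > 0) {
        int written = SSL_write(ssl_, app_to_ssl_.data(), app_to_ssl_.size());
        if (written > 0) {
            app_to_ssl_.rebuild(0);
        }
    }

    net_read_();  // pull out any decrypted application data via SSL_read
    net_write_(); // pull out any ciphertext (handshake or data) via BIO_read for ZeroMQ
}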



net_write_ and net_read_ work pretty much the same way - we use a buffer and read information from either the memory BIO (destined to be sent over ZeroMQ) or from the SSL (destined for the application). We loop over all the sections of the data, 1k at a time, and push it into a ZeroMQ message ready for sending.



As part of that, we check any error messages. If we get a WANT_READ, or a NONE error we just continue. We'll hit these, for example, when we first try and write application data when we haven't completed the handshake.



Finally, we have a few functions that allow pushing data into, and pulling it out of, the object.



When we run these, there's enough debug output to show the handshake. If we look at the output, we can see the -1s from the application data failing to write, and the reads and writes from the BIO as the handshake messages go between client and server. The "12" written below is the application message, and the 90 is the encrypted "Got it!"
DEBUG: -1 written to SSL
DEBUG: 95 read from BIO
DEBUG: 627 written to BIO
DEBUG: -1 written to SSL
DEBUG: 228 read from BIO
DEBUG: -1 written to SSL
DEBUG: 91 written to BIO
DEBUG: 12 written to SSL
DEBUG: 90 read from BIO
DEBUG: 90 written to BIO
Received: Got it!
If we run the server, we see the other side.
DEBUG: 95 written to BIO
DEBUG: 627 read from BIO
DEBUG: 228 written to BIO
DEBUG: 91 read from BIO
DEBUG: 90 written to BIO
Received: hello world!
DEBUG: 8 written to SSL
DEBUG: 90 read from BIO
The code is a bit of a quick fix, and it doesn't handle multi-part messages particularly well. How that should work is likely to be an app-specific decision, but as a starting point just returning some sort of array of decoded parts would be a good start! Hopefully this will give anyone looking to implement something more robust a few pointers! The code is up on github.



Retrieving Comments From Google+ Events

Had a question earlier about whether it was possible to retrieve comments from an event on Google+, and the answer is yes! As long as the event is public, you can grab them via the REST public data APIs.

Taking as an example tonight's London Photowalk event, we need to first get the id of the activity. This can be done by grabbing the public posts by the Photowalk account:


GET https://www.googleapis.com/plus/v1/people/111455345092279041936/activities/public?alt=json&key={YOUR_API_KEY}

X-JavaScript-User-Agent: Google APIs Explorer


The ID "111455345092279041936" there is the ID of the Photowalk account. From that list we just need to find our event, and get its ID. You can generally spot the Events posts by the provider field in the JSON of the object attached to the post.


"provider": 
"Events"
},

We can then request the list of comments for that id using the plus.comments.list functionality.

GET https://www.googleapis.com/plus/v1/activities/z13xifyx5ybui5obk04cfdywyqyfunioy1w/comments?alt=json&key={YOUR_API_KEY}

X-JavaScript-User-Agent: Google APIs Explorer

Take a look at the results over on the API explorer.


Google+ History Is Your Oyster

The History functionality in Google+ is an interesting answer to a pretty common question of "where's the write API?" It allows creating moments in a user's (private) history, which can then be reshared. It's currently in developer preview, for the express purpose of getting feedback on the API. 

One of the bits of feedback that has been acted on recently is support for setting the date of a moment, instead of just using the date on which the moment was submitted. This is done by setting a startDate field in the main JSON structure submitted to the API: 

{
    "type": "http://schemas.google.com/AddActivity",
    "target": {
        "url": "https://example.com/thing"
    },
    "startDate": "2012-08-09T13:15:12+02:00"
}

This means that things that happened in the past can now be added to a user's history, which is quite convenient for some applications. As I happened to be renewing my Oyster (London's RFID transport card) season ticket when this was added to the API, I figured it might be fun to see if it was possible to push moments based on my Oyster journeys.

It turns out that the Oyster site does give you a view on your journey history, and allows exporting it via CSV, from the Journey History page. 


The CSV itself contains the same data that is displayed - a date with start and end time, a string describing the journey, the charge, and the existing balance on the card. 
Date,Start Time,End Time,Journey/Action,Charge,Credit,Balance,Note
05-Aug-2012,14:30,,"Bus journey, route 36",.00,,13.95,""
04-Aug-2012,10:52,11:19,"Kew Bridge to Clapham Junction",.00,,10.35,""
This seems like a workable set of data, so the next step was to get it into the History service. Because History is in preview at the moment, the released versions of the existing Google API clients don't necessarily include code for it, so instead I checked out the PHP version from the repository. It's worth mentioning that there is ongoing work on the structure of the client, so it may be that any code here needs some tweaking if anyone tries using it in the future. 

The basic boilerplate can come straight from the various sample apps, but when creating the project in the API console be sure to add the Google+ API and the Google+ History API - and remember that at the moment you need to have signed up for the developer preview group to have access to do so. Then, we request access for the scope in our OAuth setup.
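
A sketch of that setup with the old PHP client (the include path, application name, client ID, secret and redirect URI are placeholders; plus.moments.write is the History scope):

require_once 'google-api-php-client/src/Google_Client.php';
session_start();

$client = new Google_Client();
$client->setApplicationName('Oyster History');
$client->setClientId('YOUR_CLIENT_ID');
$client->setClientSecret('YOUR_CLIENT_SECRET');
$client->setRedirectUri('http://localhost/oyster.php');
$client->setScopes(array('https://www.googleapis.com/auth/plus.moments.write'));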


To actually do the authentication, we then redirect the user to the auth URL we can get from the client with $client->createAuthUrl().


And when they're returned to us we can store their authentication token in the session: 
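
Following the pattern from the standard samples, something like:

if (isset($_GET['code'])) {
    $client->authenticate(); // exchanges the code that came back in the request
    $_SESSION['token'] = $client->getAccessToken();
    header('Location: http://localhost/oyster.php');
    exit;
}
if (isset($_SESSION['token'])) {
    $client->setAccessToken($_SESSION['token']);
}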


For pushing the Oyster CSV I downloaded from their site I just used a simple upload form and passed the file location to the following function, which does all the actual work. 


First thing we need to do is create classes for the services we're going to use, and open the uploaded file for reading. 

Then, we loop over the rows of the file, using fgetcsv to parse the data into an array for us. Each moment needs to be represented as a new object, so we create that, and set the type to CheckInActivity. There's a list of the currently available activities in the documentation, and it seemed to fit best in this case to have the entry check in at the end station, or the bus route. That said, it might be nice to track the duration of the trip, or perhaps to check in to both start and end locations. Bus routes also add some slight trickiness as they aren't exactly a place - and we can't see from the data where someone got on to the bus, or where or when they got off, just that they were on a certain route.
Next up we grab the fields we care about. Each record has a single date, and associated in and out times (for train/underground journeys) or just a start time (for bus trips). Using strtotime we parse those into unix timestamps. 


Now we have to set up our moment's target URL. Moments in History are private events (by default), but generally will be referencing a generally available URL. In this case I initially used the TFL bus route pages, but decided to switch to the Wikipedia pages for routes and stations as they are a bit more visual. Because the journey string is a standard format, we can detect what type of journey it is, and change our URL appropriately, in this case looking for the "route XXX" string and using that to build the Wikipedia URL. We also set the startDate of our moment to the start date and time, which should be the time the person tapped in to the bus.


If the journey was on the train we get a little bit more information. The stations listed can, but don't always, contain other useful notes. For example, for London Victoria (one of the main train and underground stations in London) we get: "Victoria (platforms 9-19) [National Rail]". In this case we want to try and extract the [National Rail] part to allow us to distinguish between railway stations and underground stations, as they have a slightly different Wikipedia URL structure. For underground stations, we could also generate a url on the TFL site, but that does require maintaining a mapping between the station IDs and names. 

In this case we log the journey against the arrival time, as we are checking in to the end station. Finally, we insert the moment, and check for exceptions. 

At that point we can run the script and get the results in the Developer Preview UI! There are definitely many more interesting things to do with the data, but it was easy to play with!
