Google+ Sign-In Localisation

Had a couple of questions recently about Google+ Sign-In in different languages. While it's rather common to want to use custom graphics instead of the supplied sign-in button, one of the nice benefits of using the supplied buttons on Android, iOS and the web is that they automatically adapt to the language of the user. This post could pretty much end there, but there are a few interesting edge cases that are worth mentioning.

Javascript

On the web the Javascript Google+ buttons will attempt to choose the language based on the browser settings. This should be (mostly) fine, but if you have a specific user setting for language you can output some configuration to force the language for all buttons (including the sign-in, +1 and so on).

These kinds of global parameters are generally configured in the ___gcfg property of the window, which the Google Javascript API checks when it loads. You'll need to put the following before (or in) the script tag that loads the Javascript:
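
    <script type="text/javascript">
      // A sketch: force all Google+ plugins on this page to render in Arabic.
      window.___gcfg = {
        lang: 'ar'
      };
    </script>
    <!-- The usual asynchronous plusone.js load follows. -->
    <script type="text/javascript" src="https://apis.google.com/js/plusone.js" async defer></script>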

The result is a button rendered in Arabic (assuming your browser language wasn't already set to Arabic, in which case there will be no change!).

Android

On Android, the language is chosen by the Locale, which you can force by updating the Configuration.locale parameter (h/t to +Lee Denison for this):
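
    // A sketch: force the locale (here to Arabic) before the button is rendered.
    // Uses android.content.res.Configuration and java.util.Locale.
    Configuration config = getResources().getConfiguration();
    config.locale = new Locale("ar");
    getResources().updateConfiguration(config, getResources().getDisplayMetrics());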

All of the strings for the translation are actually part of the resources supplied with the client library (as mentioned in the docs), which means you can actually retrieve the text directly if you'd like. This gives you the ability to have a custom button, and still pick up the localised text. There are entries for the wide and short versions of the button, for example:
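
    // A sketch: the localised text lives in the client library's resources.
    // The resource names below are from the current library, so check yours.
    String shortText = getString(R.string.common_signin_button_text);
    String wideText = getString(R.string.common_signin_button_text_long);
    // Apply it to a custom button (myCustomButton is a hypothetical view).
    myCustomButton.setText(wideText);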

iOS

On iOS, it's important to make sure that your app is configured for the languages you support. In the project settings, make sure you have added each language your app supports.

In general, it's a bit of a pain to force the language on iOS - you could update the NSUserDefaults configuration, but you'll generally need to do that very early in the execution for it to work. The easiest way during testing is just to change your language in the Settings on your device or simulator. However, if you are creating your own button and just want to get the string, that's a bit easier, using the GooglePlusPlatform bundle. Note that this isn't an official part of the API, so the name or the translation string could always change - make sure to test when upgrading between SDK versions!
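
A sketch of what that might look like - the table name and key here are illustrative assumptions, so check the strings files in your copy of the bundle:

    // Load the SDK's resource bundle, and pull a localised string from the
    // (unofficial) GooglePlusPlatform table. Both names are assumptions!
    NSString *path = [[NSBundle mainBundle] pathForResource:@"GooglePlus"
                                                     ofType:@"bundle"];
    NSBundle *gppBundle = [NSBundle bundleWithPath:path];
    NSString *signInText = [gppBundle localizedStringForKey:@"Sign in"
                                                      value:@"Sign in"
                                                      table:@"GooglePlusPlatform"];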

It is a bit easier to hint at the language to be used in the consent screen, using the [GPPSignIn sharedInstance].language property - you can pass a language code there, e.g. "es" or "en-US".

One additional thing you might notice is that different languages actually cause the button to render at slightly different sizes. Particularly if you're not using constraint-based layouts, this could cause some unexpected results.

There is actually a bit of help in the comments of the header, where you are given the maximum dimensions for each variant of the button:


// kGPPSignInButtonStyleStandard: 226 x 48
// kGPPSignInButtonStyleWide: 308 x 48
// kGPPSignInButtonStyleIconOnly: 46 x 48 (no text, fixed size)

Make sure your app accounts for these, or inspect the frame size of the GPPSignInButton after setting the style, and you shouldn't have a problem.


Testing whether a user is signed in to Google

Recently I've been in a couple of conversations where the idea of testing whether a user is logged in to Google came up. This can be helpful for tuning the experience when presenting sign-in options: you can highlight the Google+ Sign-In button on the basis that the user is already signed in to Google, and so should just need to consent. It's also one way of responding to the fact that signed-in users typically search over HTTPS, so you don't get information about the search terms they used to reach you. By highlighting the benefits of signing in, users may choose to do so, and hence give you much more ability to personalise their experience.

The (slightly arcane) method for doing this is checkSessionState. This is a bit of Google OAuth 2.0 plumbing that allows cheaply checking whether things have changed, in many cases without round-tripping to the server. There is a session state - effectively a hash of various aspects of the user's signed-in status - stored locally in the calling application's cookie/localStorage, and another one in the Google hidden auth iframe's cookie/localStorage. By passing the application's one over to the iframe, it can check whether they are the same. If they don't match, it will probably require a server check, but if they do we can be sure nothing has changed in the user's auth state.

Update: there is now a less arcane method of doing this, in the form of the status parameter on the auth callback. You can read more about it in my post on sign-in status.

Because the iframe is served by Google, the value of the session state there can be updated if the user signs out in another tab, or takes some other auth-affecting action. This is part of what the cookie-policy value in Google+ Sign-In controls: the scope of the app-side session state data storage.

While we generally can't use checkSessionState for checking much without an existing session hash, we can predict one special case value: null. Null indicates that the user is not signed in, so if checking against null returns true, we know the user isn't logged in to Google (though if it returns false, we don't know anything further about them). Below is a little example:

The code to do this is really straightforward: we just need to create a Javascript dictionary with a session_state and client_id, and pass a callback which will receive the value. We need a callback because, if there is no session state or the session state is invalid, the check might need to round-trip to the Google auth servers.
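
A sketch of how that looks, with YOUR_CLIENT_ID standing in for a real web client ID:

    <script type="text/javascript">
      function onLoadCallback() {
        // Compare the stored session state against the special value null.
        // true means they match - i.e. the user has no Google session at all.
        gapi.auth.checkSessionState(
            {client_id: 'YOUR_CLIENT_ID', session_state: null},
            function(notSignedIn) {
              console.log(notSignedIn ? 'Not signed in to Google'
                                      : 'Signed in to Google');
            });
      }
    </script>
    <script src="https://apis.google.com/js/plusone.js?onload=onLoadCallback"
            async defer></script>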

All we do here is load the plusone.js client asynchronously, with a callback that fires our checkSessionState call on load.

Update: as noted by Tim Bray in the comments below, the checkSessionState method is definitely the preferred of the two methods for production use, so consider the following example as primarily a curiosity.

As part of one discussion though, I was introduced to another method of checking state that I was not previously familiar with. The Google OpenID system has an extension to allow for this type of checking as well. In this case, we construct a URL with a return address of our current page, include the magic parameter openid.ui.mode=x-has-session, and set the mode to checkid_immediate. The second parameter ensures the user is immediately returned, and the first will only be echoed back to us if the user has a valid Google session - if it's there, we're signed in; if not, we're not!
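
A sketch of the construction - the endpoint and parameter names come from the OpenID 2.0 spec and Google's UI extension:

    // Build the checkid_immediate URL and send the user to it.
    var params = {
      'openid.ns': 'http://specs.openid.net/auth/2.0',
      'openid.mode': 'checkid_immediate',
      'openid.claimed_id': 'http://specs.openid.net/auth/2.0/identifier_select',
      'openid.identity': 'http://specs.openid.net/auth/2.0/identifier_select',
      'openid.return_to': window.location.href,
      'openid.ns.ui': 'http://specs.openid.net/extensions/ui/1.0',
      'openid.ui.mode': 'x-has-session'
    };
    var query = Object.keys(params).map(function(key) {
      return encodeURIComponent(key) + '=' + encodeURIComponent(params[key]);
    }).join('&');
    window.location = 'https://www.google.com/accounts/o8/ud?' + query;

    // Back on the return_to page: the magic parameter is only echoed back
    // if the user has a live Google session.
    var signedIn = window.location.search.indexOf('x-has-session') !== -1;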

The code is a bit more manual here, but gives you an idea. This one definitely causes a full page reload though, so I think checkSessionState is quite a bit nicer, even if it does require loading the client Javascript.


QUIC notes: Rationale, FEC and Head of Line blocking

Having been involved with ZeroMQ for a few years now, and having taken a deeper look at messaging last year, I enjoy occasionally dipping into various network protocols. One of the most interesting recent efforts is QUIC, from the Chrome team (who are just a short bridge away from my Googley home on the Google+ team), which aims to provide a post-SPDY protocol to be used instead of HTTP over TCP for high performance web sites.

Things at Google tend to start with a design doc that lays out the rationale and the plan for a given project, and happily the QUIC doc is openly available. It's a great read, and it's worth highlighting some of the more interesting problems the team are addressing.

To start with, the document lays out the (12) goals of the project and the motivations for them. These roughly break down into two camps for me:

Usable Now

This is a protocol the team intends to deploy, and it is one that has to do the job of existing setups. That means it's got to be efficiently routed, switched, and handled by the myriad bits and pieces of network hardware that sit between a server and a client. It has to offer reliability, so that apps which require reliable delivery can get it without layering their own on top. It has to be friendly to the internet, so it will back off when there is congestion, which is vital to getting good performance over busy links. Finally, it's got to be at least as secure as the current SSL/TLS setups we're familiar with, so people can trust the protocol to transport confidential data.

  • Works with today's internet
  • Support reliable transport for multiplexed streams
  • Efficient demux-mux properties
  • Congestion avoidance comparable to TCP (across multiplexed streams)
  • Privacy assurances comparable to TLS (e.g. HTTPS)
  • Reuse of existing protocols wherever possible

Low latency/high throughput

These goals are the real meat of the improvements the protocol is aiming to deliver. Some of these are obvious (e.g. reduced packet count is likely to make things quicker, all else being equal), but some of them aren't, so I've picked out a few below.

  • Reliable and safe resource requirements scaling
  • Reduced head-of-line blocking due to packet loss
  • Minimal round trip costs in setup and during packet loss
  • Minimal round trips during setup
  • Forward error correction to avoid retransmission
  • Reduced bandwidth and increased channel status responsiveness
  • Reduced packet count

Minimising Round Trips

Networking technology has consistently improved over the last 30-40 years: we get faster or more capable chips, more memory in devices, higher bandwidth links, and all sorts of other improvements which result in us being able to shift more data, more easily. The one thing that stays constant in all this is the speed of light - if we have to send a signal over a given distance, that's the best we can do. This means that while we might be able to send more data down a line at a given moment, the time taken to get a response isn't going to improve very much. The round-trip time (latency, or ping) is the length of time to go to the server and back, and it can be quite costly, particularly when there are high latency hops involved.

Most of the time this is just a fact of life - if you're far away from a server, you're going to have to wait a bit longer. That said, different design choices can mean it has an outsized impact on the performance of certain applications. The web as it stands is built on HTTP over TCP - a protocol which presents a reliable connection stream to the applications. When a browser needs to make a connection, it has to go through a handshake. To take a simplified exchange:

Client -> Server: SYN
Server -> Client: SYN ACK
Client -> Server: ACK
Client -> Server: (segment with http request in it)
Server -> Client: ACK
Server -> Client: (segment with http response)
Client -> Server: ACK
Client -> Server: FIN
Server -> Client: FIN ACK

In reality there would likely be many more segments (aka packets) with the different parts of the http response in, but the point is that before the client can even begin the process of rendering and displaying the response, it has to go through several round-trips to set up the connection. If the connection were an HTTPS one, there would be even more. All of this means there is a delay in doing the thing the user wanted. This tends to hurt even more on mobile devices, where to save power the radio is often turned down when not being actively used, severing any existing connections.

A lot of work has been put into TCP and HTTP to speed this up. This includes widespread ideas like pipelining, where the browser keeps a TCP connection open and sends further HTTP requests down that same pipe. It also includes more unusual techniques: Chrome's backup TCP method means it will start a new TCP connection to the server if it hasn't had a response in 250ms.

QUIC is looking at this problem in two ways. One of them is by using UDP rather than TCP. TCP is designed for reliable, ordered data. UDP is more of a fire-and-forget protocol that may or may not get the data you sent to the other side, and doesn't make any promises on the order in which it'll be received.

The second thing QUIC aims to do is merge the security layer (e.g. TLS as in HTTPS) with the transport. This means that you can save round-trips. Rather than having to establish a TCP connection then establish TLS on top, you can set up both at the same time and not have to go back and forth so many times. QUIC attempts to minimise these trips by initialising the crypto as it initialises the connection, and by making sensible assumptions such as using previously established credentials if possible.

Packet Loss & Retransmission: Forward Error Correction

One of the other goals QUIC identified was related - avoiding retransmissions on loss. As QUIC uses UDP, it has to build some method for determining whether the other side of a connection has received all the parts of a message - say all of the parts of a web page. TCP does this by having a sequence number, and having the receiving side send ACK messages back for the highest continuous number it has seen - so if it sees segments 1, 2, 3, 5 it can only ACK up to 3, as 4 has not yet made it. If packet 6 then arrives, the receiver can again send out an ACK for 3, so the sender can tell it's stuck and needs some of the earlier data to be retransmitted.

There are other methods. In PGM, a multicast protocol, receivers don't ACK data they have received; instead they look for gaps in the sequence numbers and send out a NACK, a negative acknowledgement. So, if a PGM receiver sees 1, 2, 3, 5 it can send back a NACK message for 4, and the sender can retransmit it.

In both cases, this introduces latency: if the receiver needs all of the parts of the data to have a usable chunk of information, it has to wait for the sender to work out that it should resend, and then for the data to be delivered. In the meantime, none of the subsequent data can be delivered, even if it would be usable without the missing chunk.

QUIC tries to avoid having to go back to the sender through the use of Forward Error Correction. This works by having a backup packet that can be used to regenerate any one of the packets in a group - like an understudy in a theatre company who has learned the lines for 3 different roles, and so can cover for any one of them. It spends some extra bandwidth up front in order to introduce redundancy into the data stream. This allows receivers to fix missing packets on their own, without having to go back to the sender and incur extra latency.

We can demonstrate this briefly. If we're sending a sentence as a series of packets, it might look like this:

1 IN A MAN THAT'S JUST 
2 THEY ARE CLOSE DELATIONS
3 WORKING FROM THE HEART
4 THAT PASSION CANNOT RULE

After our run of packets, we then send a forward error correction packet. To do this, we just need to XOR each byte of the previous packets (which we can do as they go out) with our error correction packet, padding the data packets out to the maximum length with zeros if necessary. So, for example for this run of 4, the first bytes would be the ASCII value of I (73), T (84), W (87) and T again: 73^84^87^84 = 30.

We then send that fifth packet. If any one of the preceding four is lost, we can XOR the rest against the FEC packet, and reconstruct it. For example, if we lost the first packet, for the first byte we would XOR T, W, T and our FEC value of 30: 84^87^84^30 = 73 - or the ASCII code for I! Here's an example in Javascript:
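
    // A sketch of the idea (not QUIC's actual implementation): XOR the
    // packets together to build the check packet. Shorter packets are
    // implicitly zero-padded, since XOR with 0 is a no-op.
    function makeFecPacket(packets) {
      var len = Math.max.apply(null, packets.map(function(p) { return p.length; }));
      var fec = new Uint8Array(len);
      packets.forEach(function(p) {
        for (var i = 0; i < p.length; i++) {
          fec[i] ^= p.charCodeAt(i);
        }
      });
      return fec;
    }

    // Recover a lost packet by XORing the survivors against the check packet.
    function recoverPacket(survivors, fec) {
      var bytes = new Uint8Array(fec);
      survivors.forEach(function(p) {
        for (var i = 0; i < p.length; i++) {
          bytes[i] ^= p.charCodeAt(i);
        }
      });
      // Strip the zero padding off the recovered packet.
      return String.fromCharCode.apply(null, bytes).replace(/\0+$/, '');
    }

    var packets = ["IN A MAN THAT'S JUST",
                   'THEY ARE CLOSE DELATIONS',
                   'WORKING FROM THE HEART',
                   'THAT PASSION CANNOT RULE'];
    var fec = makeFecPacket(packets);
    // Drop the first packet, and rebuild it from the other three plus the FEC.
    console.log(recoverPacket(packets.slice(1), fec)); // IN A MAN THAT'S JUST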

So, for occasional, uncorrelated packet loss we can easily recover the missing packets without having to go back to the server or wait for a timeout. QUIC groups its data streams into FEC groups, and sends out a check packet at the end of each one.

Head of line blocking due to packet loss

Much of QUIC is based on the work done for SPDY, which included multiplexing. Multiplexing is a convenient solution to a couple of problems. Firstly, the round-trip cost of setting up a TCP and TLS connection is paid for each independent connection between a client and a server. Secondly, most browsers have a limit on the number of TCP connections that can be made to a given domain at a time - so even when more requests could be made in parallel, they won't be.

SPDY attempted to resolve this by combining multiple streams into a single TCP connection. While you can pipeline requests across a single connection, it's easy for that pipeline to become stuck - for example, retrieving a slow-to-generate page when it could be grabbing fast-to-get images. Because these go through separate SPDY streams, everything can be retrieved just as fast as it can be transmitted.

However, just because one slow request can't hold up others doesn't mean that a similar situation can't occur, just at a lower level. Imagine we have 6 streams between a client and a server:

1 C <----[]----- S
2 C <-------[]-- S
3 C <---[]------ S
4 C <--[]------- S
5 C <-------[]-- S
6 C <------[]--- S

In a simplified world (in reality the process is sped up by sending a window's worth of packets before waiting for acknowledgement), the server sends some data, then the client acknowledges the data, and the server doesn't send any more until the client has acknowledged. Now, imagine that we get some loss:

1 C <---[]------ S
2 C <------[]--- S
3 C <--[]------- S
4 C <-X--------- S
5 C <------[]--- S
6 C <-----[]---- S

Now stream 4 has to wait for the server to time out (which defaults to 3 seconds!) or to recognise the lost data, and resend the packet. If the server has sent other packets in the meantime they'll make it to the client, but they won't be delivered to the application. Bad news for connection 4, but no worries for everyone else.

Now, if we imagine the same thing in a SPDY connection, where all 6 streams are combined into one TCP connection:

1 C <-|-[]------ S
2 C <-|----[]--- S
3 C <-|[]------- S
4 C <-X--------- S
5 C <-|----[]--- S
6 C <-|---[]---- S

Because all 6 streams are now flowing over 1 TCP connection, and TCP guarantees ordering, none of the other streams are going to get their data either. TCP doesn't know the streams are unrelated, so it holds the entire flow up until it gets the missing data.

QUIC gets around this by being based on UDP, so if one packet is lost, it doesn't stop the rest of the data being delivered. Because UDP doesn't guarantee order, it's up to QUIC to reassemble the incoming data into useful streams - and the way streams and multiplexing work is something I'll talk about in another post.


Google+ Sign-In & Multiple Applications In The API Console

Applications which access Google APIs are configured in the API console as 'projects'. Each project can contain multiple client IDs, and each client ID can represent a different variant of the application: for example an Android client ID, a web client ID, and an iOS client ID. There can be multiple of each, so there might be different client IDs for two different versions of an Android application within the same API console project.

One common question is whether a developer should group their applications under a single API console project, or have separate projects for each. While it's pretty easy to see that FooPlayer iOS, FooPlayer Android and FooPlayer.com should all be under the same project, the question is what to do with a situation where there are actual differences other than platform, such as FooPlayer Pro and FooPlayer Free.

As a rule of thumb, if the different apps provide similar core functionality, they should be one project. For example, if an application has a free and a pro version, or if there are multiple country specific versions of an application or site, they should be a single Google API console project. On the other hand, if the applications have different brands, then that's usually a good sign they should have separate API console projects.

To work out what's best for your app, though, it can be helpful to think about the tradeoffs of having a shared project versus separate ones.

The benefits of a shared project:

Cross client SSO

If a user has signed in on the web and subsequently opens an Android application from the same API console project then they will be seamlessly signed in to the Android app (presuming they're authenticated on the device with the same Google account). The same is true in the opposite situation, starting on Android then moving to the web. This does require that the OAuth 2.0 scopes and app activity types requested in both apps are the same - if not, that tends to be a smell that the applications should really be in separate projects.

Shared quota

Most apps won't need any extra quota for the main Google+ Sign-In APIs, but if using non sign-in APIs or other Google services they may need extra quota allocated. This is done at the project level, so all client IDs in that project pull from the same bucket, which is easier than getting approvals for quota increases multiple times.

All insights in one place

Google+ platform insights are enabled by linking a Google+ page with a project, and will give statistics based on the activities written and sign-ins for all client IDs in that project. If that sounds like a good thing, that's a good indication the apps should be in the same project.

The downsides of a shared project:

Deep link application ordering and installs

When a user shares from an application to Google+, the share can include a deep link ID. When the reader taps the link in the Google+ application on iOS or Android, they are taken straight to the appropriate Android or iOS application, or to the Play or App Store if the app is not installed. If there are multiple apps in the project for either platform, then each of these behaviours will depend on which are installed, and the order they are defined in the API console. So, let's say the project contains:

  • Match-3-tastic!
  • FooPlayer Free
  • FooPlayer Pro

If user A shares from FooPlayer Free, and user B taps the link, then user B will be taken to the first app that they have installed, based on the order in the API console. If user B had FooPlayer Free and FooPlayer Pro installed, they would be taken to FooPlayer Free. If they didn't have any of them installed, the same ordering is followed, so they would be taken to the install page for Match-3-tastic. If the apps aren't all able to handle the same deep links, this is an indication they should be in separate projects.

OTA Android installs

If a project has multiple Android applications, only one can be specified in the apppackagename parameter on the web sign-in button to be installed over the air. If that app is installed, the user won't be prompted to install one of the others in the project, even if they don't have it.

Single API console branding

When the user signs in on the web or iOS, or goes to the app management page in Google+, they will see an app name and image taken from the branding section of the API console project. If that branding doesn't fit the application they were using they might not trust the experience - in general, that's a pretty good smell that the applications should be in different projects.

App activity sources

If the user views their app activities, or those app activities are surfaced on Google, they will all appear to come from the same app (whatever is configured in the branding settings in the API console). If Match-3-tastic and FooPlayer are writing very different kinds of activities, that could be confusing.

Some ways of working around these issues:

Deep link application ordering

If a user has both FooPlayer Free and FooPlayer Pro installed, then a deep link might go to Free when it would be better to go to Pro. Apps can give a better experience here either by standardising the deep link handling in a library (so that there is no effective user experience difference), or by trampolining requests from one to another. It's relatively straightforward on both Android and iOS to see if another application is available, and to redirect the user to the preferred application if so. On Android this would just be firing off an Intent, while on iOS the app would use openURL:.

Sharing users across multiple applications

If some applications are part of a different API console project, that doesn't mean they have to be totally separated. Most applications that implement Google+ Sign-In also create a user in their own account management system, and associate the Google+ login with that account. This means the use of accounts can be separated from the project used to log in with.

As an example, let's say we had 3 different games, each with their own project:

  • Match-3-tastic
  • Sparklefarmer
  • Battlefield of Combat (War Edition)

All three could talk to the same set of back-end services. If I sign in to Sparklefarmer with Google+, that can create a user account associated with my Google+ profile ID. If I then sign in to Battlefield of Combat, I can look up the account using the same profile ID, and simply mark that I am an active player for both games. That way I can still have a merged user account, but keep independent Google projects, and appropriate branding.

That said, what about seamless sign-in? That's a really nice feature, and if we're just using Google+ Sign-In for identity it would be great to have that flow across. Well, with Android we can actually do that, thanks to the magic of ID tokens.

If a user has signed in to Sparklefarmer, they have associated an account with their Google+ ID. When they log in to Battlefield of Combat on Android (and have granted the app the GET_ACCOUNTS permission, of course), we can retrieve ID tokens for each of the accounts on their device, which we can pass to the back end. These are cryptographically signed tokens which include the Google+ user ID, so we can check whether we have a user record in our shared backend, without having to have the user interact with us other than starting the app!

This will allow the user to be signed in to their shared game back-end account, but won't sign them in to Google+. This is not so bad though: it can be a nice experience to have the app immediately recognise you, and then later prompt to connect Google+ to allow retrieving friends, writing app activities, or whatever other social features are needed.

The data in an ID token, when stripped apart and decoded, looks like this:

{
  "iss": "accounts.google.com",
  "sub": "113340090911954741696",
  "email": "notarealaddress@gmail.com",
  "aud": "366667191730-2djgro2rqr5ro230vio055gb8qr5h3ue.apps.googleusercontent.com",
  "iat": 1373537811,
  "exp": 1373541711
}

Note the "sub" value there - that's my Google+ ID. The "aud" is the client ID which it was granted for, and the the exp is a timestamp after which the token should no longer be used. Note that you should never try to verify these by hand, but use a client library instead - there's much more on this in Tim Bray's blog post on verifying calls to backends from Android apps.

To retrieve them, we'll need to grab the account names of any Google accounts on the device, and then fire off calls to the getToken method of GoogleAuthUtil for each.

To retrieve the token, we need a client ID for a server component created in the same API console project as the Android client ID - this fills the 'aud' or audience field in the ID token, and you'll want to check that on the backend to make sure it was generated for the app you expect (in our case: Battlefield of Combat). Because this call can block, it must be done off the UI thread!
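
A sketch of the retrieval - YOUR_SERVER_CLIENT_ID is a placeholder, and fetchIdTokens is a hypothetical helper you'd run from an AsyncTask or similar:

    // Uses android.accounts.{Account,AccountManager} and
    // com.google.android.gms.auth.GoogleAuthUtil.
    private static final String SERVER_CLIENT_ID =
        "YOUR_SERVER_CLIENT_ID.apps.googleusercontent.com";

    // Blocks on network I/O, so never call this on the UI thread.
    private List<String> fetchIdTokens(Context context) throws Exception {
      List<String> tokens = new ArrayList<String>();
      AccountManager manager = AccountManager.get(context);
      for (Account account : manager.getAccountsByType(GoogleAuthUtil.GOOGLE_ACCOUNT_TYPE)) {
        // The magic audience: scope asks for an ID token, not an access token.
        String scope = "audience:server:client_id:" + SERVER_CLIENT_ID;
        tokens.add(GoogleAuthUtil.getToken(context, account.name, scope));
      }
      return tokens;
    }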

Note that you must send these tokens over HTTPS! Even though they can be verified as having been generated by Google, there's no way of telling who actually sent them, so if they leak it is possible an attacker could do something naughty. They do have a limited lifetime, and all verification should include checking the expiry time, but it's best never to send them in the clear.


Who Are You Anyway?

Social sign-in adds an extra twist to sign-in on the web. While systems like OpenID are often used purely to assert identity (e.g. you are the same person as when you came here before), OAuth and OAuth 2.0 were always about granting access to data (e.g. you give me permission to know your name and friends). While both of these get to the same place for most developers - someone can log in, and you can reliably and securely know which user in an application they map to - the difference is largely about what other data is available.

Most social sign-in systems grant access to profile information, such as name, gender, email address, age or age range, and other more specific information. They often also grant access to a user's activities on the identity provider, either explicitly or implicitly: for example, if I sign in with Google+ you can retrieve a list of the people I have circled (or at least the ones I have given you access to), or if I sign in with Twitter you can easily get a list of my tweets. These are powerful extra capabilities, and allow relying parties to customise the user experience to better fit their users - and hopefully give them a richer, better tailored experience.

The challenge with this tailoring is that we are only ever looking at a single facet of the user's personality at a time. If we retrieve any actions from a social network, we're only going to see the kinds of things the user wants to share or perform on that network. Sometimes the distinctions are fairly predictable - it's easy to imagine a given person might have more professional interactions on LinkedIn - but many other cases may not be so obvious, and will depend on the kind of friends and connections people have on various networks.

This is one of the reasons that giving users a choice of identity providers can be valuable, and why it can be tricky to ask users to connect multiple providers: there may be a mismatch between their usage and viewpoints of the different services.

One of the interesting side-effects of Google+ Sign-In is that it is easy to add scopes for any other Google service, and there are enough of them that you can actually hit these same issues of tailoring content depending on the services you have access to.

As a quick experiment, imagine we were suggesting interests based on activity on a sign-in provider. This is a common case for many networks that want to suggest content streams to follow. We might do this by looking at two easily accessible sources - my Google+ posts and my YouTube likes. In this case I haven't done anything particularly clever with either - just grabbing a pageful of results, and looking for the most popular nouns.

My YouTube likes have words like: google, web, rancilio, app, silvia, i/o, prairie and dogs. My Google+ posts: google+, sign-in, google, developers, php, people, version. Even in this little sample for two different services I use quite closely together, different things are appearing - my Google+ stream clearly has more work-related posts (Google+ Sign-In for example), while my coffee machine makes an appearance in my YouTube list*.

This means that if an application was trying to form a picture of my interests from this, it might fail. Being too general can mean missing out on the compelling content that gets me to use the application, which is exactly what we try to avoid by building these types of systems. It may be best to look at just one source, and focus on key interests from there.

If you want to try it, you can do so here - it's just some Javascript, so there's nothing that will Snowden your results or similar.

In case you're interested in what the code is doing: we first need to request access to the YouTube read-only and Google+ login scopes, by setting our sign-in button's scopes parameter to "https://www.googleapis.com/auth/youtube.readonly https://www.googleapis.com/auth/plus.login". Then we can query the relevant APIs: plus.activities.list for Google+ posts, and playlistItems.list for YouTube, though we have to call channels.list for the user first to get the ID of their liked-videos playlist. We extract the title and description from each, and send them to our processor (processText here is our own helper):
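
    // A sketch - assumes the plus and youtube APIs have been loaded with
    // gapi.client.load, and the user has signed in with both scopes.
    function fetchSources() {
      gapi.client.plus.activities.list({userId: 'me', collection: 'public'})
          .execute(function(resp) {
            resp.items.forEach(function(activity) {
              processText(activity.title + ' ' + (activity.object.content || ''));
            });
          });

      // channels.list gives us the ID of the liked-videos playlist,
      // which we can then page through with playlistItems.list.
      gapi.client.youtube.channels.list({part: 'contentDetails', mine: true})
          .execute(function(resp) {
            var likes = resp.items[0].contentDetails.relatedPlaylists.likes;
            gapi.client.youtube.playlistItems.list({part: 'snippet', playlistId: likes})
                .execute(function(listResp) {
                  listResp.items.forEach(function(item) {
                    processText(item.snippet.title + ' ' + item.snippet.description);
                  });
                });
          });
    }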

Next, we take advantage of JSPos, a simple part-of-speech tagger written in Javascript. Part-of-speech taggers attempt to assign each word in a sentence to a grammatical part of speech - verb, noun, adjective and so on - based on a list of mappings and some transform and positional rules. In this case, we run it over each sentence we get, looking only for nouns to give us a rough summary of what the text is talking about:
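
    // A sketch: tag each word and tally the nouns (NN* tags). Lexer and
    // POSTagger are the classes JSPos provides; processText matches above.
    var counts = {};
    function processText(text) {
      var words = new Lexer().lex(text);
      var tagged = new POSTagger().tag(words);
      tagged.forEach(function(pair) {
        var word = pair[0].toLowerCase();
        var tag = pair[1];
        if (tag.indexOf('NN') === 0) {
          counts[word] = (counts[word] || 0) + 1;
        }
      });
    }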

Finally, we count those up, and emit the top most common:
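
    // Sort the tally and emit the ten most frequent nouns.
    var top = Object.keys(counts)
        .sort(function(a, b) { return counts[b] - counts[a]; })
        .slice(0, 10);
    console.log(top.join(', '));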

When creating sign-in and requesting access to services, allowing users to control what is being considered and processed can be very helpful. For an interests based example as above, before signing in with Google+, the application could ask the user if they would also like the app to extract interests from YouTube.

If the application used multiple functions from a provider - such as extracting interests but also finding friends - offering users the chance to just take one function could mean they would be more comfortable connecting accounts. Again, the application could give them an option when connecting to say "find friends" and "tailor content", and allow users to choose either or both. That way the user gets to stay in control of which face they present to the application, but the application gets the benefit of connecting the accounts.

* As does a video about prairie dog language which happens to have a particularly long description. This is why scoring is generally done with something slightly more sophisticated than term frequency!


Common Issues With Google+ Sign-In On iOS

With everyone's hearts all a-flutter over the prettiness of iOS 7 from WWDC, I thought it would be a nice moment to summarise some potholes I've seen people trip over while implementing Google+ Sign-In on iOS. While overall it's pretty straightforward, there are some things that can make life a little tricky. However, for reference I've also put up a simple gist of a sign-in implementation that includes an AppDelegate and a ViewController with the sign-in button on.

In this case though, we'll take a look at some problems that might bite you during development, and some that might hit later on.

Forgetting the resource bundle

When you include the Google+ iOS SDK, you need three files: GoogleOpenSource.framework, GooglePlus.framework, and GooglePlus.bundle. If you forget the frameworks you're likely to get a big obvious compile error about not being able to find the classes you want to include, but the bundle can be a bit more subtle. It contains the translations for the supplied GPPSignInButton, and it includes the images which that button uses. If you don't include it everything will still compile - you'll just get an invisible button (which looks like the button hasn't loaded at all), as it can't find the images.

Forgetting -ObjC linker flag

The GoogleOpenSource.framework contains a number of files from the Google Toolbox for Mac, a very helpful collection of open source utility classes and libraries that are used extensively in Google libraries and apps. Several of these are implemented as categories, often adding functionality to Foundation classes. Because of the vagaries of Objective-C linking, these category references don't necessarily cause their defining classes to be pulled in from the static libs that come with the SDK. This means that you can get odd "unrecognized selector" errors around methods like gtm_httpArgumentsString.

The solution to this is to add the -ObjC flag to the "Other Linker Flags" in the project's Build Settings. This instructs the compiler to pull in the code for these categories, and everything can proceed smoothly.

One important point here is that the flag is case-sensitive: missing the capital O or C is pretty easy to do, and will result in the same kind of error.

Not setting up a callback URL - or having it slightly wrong

A number of operations in the SDK involve the user coming back in from another application. These include sign-in, where the user will be redirected out to the Google+ app or browser to sign in, then directed back to the application; sharing, where the user is taken out to the browser and then back; and deep linking, where the user is sent to the application from the Google+ app. To do these things, the app needs to have defined a custom URL scheme, and that custom URL scheme needs to be registered in the API console. The custom URL scheme is based on the bundle ID of the application.

While failing to do this is fairly obvious (e.g. the user is left in Safari, Chrome or the Google+ app after the operation, rather than being redirected back to your app), one issue that often trips people up is getting the bundle ID slightly wrong! It's very easy to have set up a client ID with a small typo in the bundle ID, which then doesn't match what is generated from the application itself. Always check that field closely if you have any errors around redirecting. Of course, once you redirect, the call needs to go somewhere, leading to...

Not registering an openURL handler

Even if the bundle ID is right, it's really important to make sure that the GPPURLHandler is wired up to handle the call. This is a new-ish helper that manages routing openURL calls to GPPSignIn, GPPShare and GPPDeepLink depending on the type of the call. It returns a BOOL, so you can easily check it if you have other openURL handlers. Wiring it into the app delegate looks something like this:
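
    // In the AppDelegate: give the Google+ SDK first go at incoming URLs.
    - (BOOL)application:(UIApplication *)application
                openURL:(NSURL *)url
      sourceApplication:(NSString *)sourceApplication
             annotation:(id)annotation {
      if ([GPPURLHandler handleURL:url
                 sourceApplication:sourceApplication
                        annotation:annotation]) {
        return YES;
      }
      // Any other openURL handling can go here.
      return NO;
    }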

Not using trySilentAuthentication

If you've ever wondered how to avoid having the user sign in each time the app opens, trySilentAuthentication is your friend. If the user has already signed in, the SDK stores a keychain entry for that, and (most of the time) calling trySilentAuthentication will fire the finishedWithAuth:error: call on the sign-in delegate in short order.

The trySilentAuthentication call will return a BOOL to indicate whether it has a stored credential. If it does, you'll probably sign in successfully, but there is always the chance that the user has disconnected your application from the Google+ side, leading on to...
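
Typical usage looks something like this sketch (kClientId being your own client ID constant), e.g. in the first view controller's viewDidLoad:

    GPPSignIn *signIn = [GPPSignIn sharedInstance];
    signIn.clientID = kClientId;
    signIn.delegate = self;
    if (![signIn trySilentAuthentication]) {
      // No stored credential - show the GPPSignInButton instead.
    }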

Not handling errors based on disconnect

Offering disconnect in an app is really important for Google+ Sign-In. It's also pretty easy from the app itself: [[GPPSignIn sharedInstance] disconnect]. The tricky thing tends to be managing disconnects that have occurred in other applications, or (even more challenging) from the Google+ apps management page directly.

As of version 1.3.0 of the SDK, if there is no access token a new one will be automatically fetched, and finishedWithAuth:error: will be called with an error if the refresh fails. However, there is an edge case where the app is disconnected, but is still open on the user's iOS device. In this case, the access token will appear valid, but calls will fail.

To guard against this problem, you can check the error.code coming back in the completion block from API calls - 401 indicates that the call was unauthorised, and will usually mean a token has been revoked. In that case, it's generally a sensible move to reset the user to a signed-out state by calling [[GPPSignIn sharedInstance] signOut]. For example, a sketch with plusService being a configured GTLServicePlus and query whatever GTLQueryPlus you're running:
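
    [plusService executeQuery:query
            completionHandler:^(GTLServiceTicket *ticket,
                                id object, NSError *error) {
      if (error && error.code == 401) {
        // The token has probably been revoked - reset to signed out.
        [[GPPSignIn sharedInstance] signOut];
        return;
      }
      // Handle the response as normal.
    }];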

Not adding files for other APIs

It's common to want to use multiple Google APIs in one project, but just because the Google+ SDK contains a GoogleOpenSource framework, that doesn't mean it includes every class used by every other API. It's important to make sure to add any dependencies those APIs may have - for example if using the Google Drive SDK, GTMHTTPUploadFetcher is needed in order to upload files to Drive.

Forgetting to add a consent screen icon

You may have noticed that in the latest Google+ iOS app there is a place to actually manage your connected apps, and that the apps have an icon displayed there.

Just like on the consent screen, this icon is taken from the Branding Settings under your project in the API console. If you haven't uploaded an image there, you'll get some rather dull grey squares.

Hopefully you can avoid these issues in your own apps! If anything isn't clear check out the gist and the official documentation, and don't hesitate to ask a question in the Google+ Developers Community, or on the Stack Overflow tag!


Deeplinking Into The Google+ Apps

While setting up your application to receive deep links from the Google+ apps on web, Android and iOS is pretty well documented, it's not necessarily obvious that you can deep link into the Google+ apps on Android and iOS.

On Android, the Google+ app registers intent filters for (most of) the regular http://plus.google.com/* URLs. From version 4.4 of the Google+ iOS app, it also registers (again, most of) the web URLs, but with the custom gplus:// scheme. If you're reading this on one of those devices right now, you should be able to try these out.

Actually using these from your app is straightforward:

On Android, the easiest way of starting out is to just fire an intent for the desktop URL, which the Google+ Android app registers a filter to handle:
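
    // A sketch - PROFILE_ID stands in for a real profile ID.
    Intent intent = new Intent(Intent.ACTION_VIEW,
        Uri.parse("https://plus.google.com/PROFILE_ID/posts"));
    startActivity(intent);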

However, because we're using a regular web URL, the default action will be to give us a chooser, which is probably not the ideal result.

We can fix this by adding the package name to the intent, which will open Google+ directly:
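
    // Send the intent straight to the Google+ app, skipping the chooser.
    intent.setPackage("com.google.android.apps.plus");
    startActivity(intent);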

However, if we do that and the user doesn't have the Google+ app installed, they're going to have a bad time. We can check in two ways: either by using the PackageManager, or (if we're integrating the Google+ SDK) using the GooglePlusUtil that comes as part of the SDK:
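
    // A sketch of both checks, from an Activity.
    // Option 1: ask the PackageManager whether Google+ is installed.
    try {
      getPackageManager().getPackageInfo("com.google.android.apps.plus", 0);
      intent.setPackage("com.google.android.apps.plus");
    } catch (PackageManager.NameNotFoundException e) {
      // Not installed - leave the intent alone and fall back to the chooser.
    }
    startActivity(intent);

    // Option 2: GooglePlusUtil from the SDK, which can prompt an install.
    int errorCode = GooglePlusUtil.checkGooglePlusApp(this);
    if (errorCode == GooglePlusUtil.SUCCESS) {
      intent.setPackage("com.google.android.apps.plus");
      startActivity(intent);
    } else {
      GooglePlusUtil.getErrorDialog(errorCode, this, 0).show();
    }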

For iOS, we can use the UIApplication class to test whether the Google+ app is available, using canOpenURL:, and then make a call to openURL: with the gplus:// based URL to start the app:
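
    // A sketch - PROFILE_ID must be a numeric profile ID (see below).
    NSURL *gplusUrl =
        [NSURL URLWithString:@"gplus://plus.google.com/PROFILE_ID/posts"];
    if ([[UIApplication sharedApplication] canOpenURL:gplusUrl]) {
      [[UIApplication sharedApplication] openURL:gplusUrl];
    }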

One small caveat is that only numeric profile IDs work on the gplus:// URLs, so make sure not to use a +VanityUrl if linking to a profile!


Google+ iOS SDK 1.3.0

Google I/O was pretty busy for Google+ all round, and that includes from the point of view of anyone developing on iOS: we had (by my count) at least 25 iOS apps appearing from various partners in the Google+ sandbox, there were a bunch of great questions coming our way at the developer sandbox, and on top of that my friends +Silvano Luciani and +Xiangtian Dai presented on integrating Sign-In, which you can watch on YouTube.

One bit of news that might easily have been missed was that there was a new version of the iOS SDK released, version 1.3.0. This was a pretty small release for features, but incorporates a lot of feedback from developers, and addresses a couple of common issues.

First, and possibly most helpfully, the various components have been packaged up as frameworks. There are now three packages in the SDK:

  • GoogleOpenSource.framework - the open Google Toolbox libraries, and the Google+ services
  • GooglePlus.framework - the headers and library file for the Google+ iOS SDK
  • GooglePlus.bundle - the translation strings and image assets for the Google+ Sign-In button.

This means adding the files is just a case of dropping in those frameworks, but it does mean that when upgrading you have to change imports to refer to the classes inside the framework, e.g. #import <GooglePlus/GooglePlus.h>. Helpfully, that GooglePlus.h header includes all of the files you need, so unless you're keen to minimise what you import, you can just drop that in and forget about it.

The second feature is something I had completely missed, and only realised from watching XT demo the functionality! Rather than having to construct a GTLServicePlus and pass it an authenticator, you can now get one directly from the GPPSignIn singleton via the plusService property:
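
    // A sketch: grab a ready-authenticated service, and run a query with it.
    GTLServicePlus *plusService = [GPPSignIn sharedInstance].plusService;
    GTLQueryPlus *query = [GTLQueryPlus queryForPeopleGetWithUserId:@"me"];
    [plusService executeQuery:query
            completionHandler:^(GTLServiceTicket *ticket,
                                GTLPlusPerson *person, NSError *error) {
      // person holds the signed-in user's profile.
    }];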

Note that you might get yourself into trouble if you try this with a GPPSignIn that is not signed in - the authenticator won't be set, and you'll make unauthenticated calls. In the future, the team may have the library return nil for the plusService if it's not authenticated, which should hopefully make this more obvious!

The third thing that was made easier addresses a common question from developers - how can I make sure I have an access token, to send to a server for example? In the background, the SDK holds an (hour-long) access token and a (long-lived) refresh token for you, and after the user interactively signs in both are available. However, because the access token is relatively short-lived, only the refresh token is written to the keychain. This could cause a bit of confusion: when the user starts the app again and is signed in smoothly with the trySilentAuthentication method, the sign-in delegate would get a callback, but when it then tried to grab the access token, the response would be nil, as the SDK had only checked for the presence of the refresh token.

This would be taken care of automatically when calling GTLService* functions, and it was always resolvable by calling the authorizeRequest: method on the GTMOAuth2Authentication object (for example, with nil as the request) to force a token to be generated, but that was not entirely obvious! In version 1.3.0 and up this is handled for you, so you can just grab the token and carry on. This also has the nice property that if the user has disconnected your app, then on calling trySilentAuthentication you'll receive a callback to didFinishWithAuth:error: with the error argument set appropriately.


Batching calls to Google APIs (Javascript)

One of the benefits of having a standardised API layer across all the (recent) Google APIs is that a bunch of features come for free. One of these handy items is batching, which is generally pretty easy to do.

For example, an awful lot of Google+ Sign-In implementations retrieve both the signed-in user's profile information and the collection of friends the user shared with the app when the user connects. This generally necessitates two calls, two connections, and two lots of overhead, but can be easily combined into a single request.

If you take a look at the Google+ Javascript QuickStart you'll see there is a profile function and a people function, each making a call to gapi.client.plus.people.something, and then request.execute. We can replace those with a single function that combines both - a sketch along those lines:
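
    function fetchProfileAndPeople() {
      var profileRequest = gapi.client.plus.people.get({userId: 'me'});
      var peopleRequest = gapi.client.plus.people.list(
          {userId: 'me', collection: 'visible'});

      // Bundle both calls into a single HTTP request.
      var rpcBatch = gapi.client.newRpcBatch();
      rpcBatch.add(profileRequest, {callback: handleProfile});
      rpcBatch.add(peopleRequest, {callback: handlePeople});
      rpcBatch.execute();
    }

    function handleProfile(response) {
      // Batched responses are wrapped - the resource is in response.result.
      console.log('Hello ' + response.result.displayName);
    }

    function handlePeople(response) {
      console.log(response.result.totalItems + ' people shared with this app');
    }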

All we've done here is create each request, and a new RPC batch with gapi.client.newRpcBatch(). It's called an RPC batch as we're actually using the JSON-RPC endpoint here - every API automatically exposes both RPC and REST-style URL-structured endpoints, but the RPC one is a little easier to work with for batching.

We then add each of our requests (even though they're to different parts of the API) to the RPC batch, and associate the callback with them. The other parameter we could pass in there is 'id', which would allow us to write a single callback function and pull out the requests we wanted, if that was easier. One thing worth noting is that the callbacks are slightly different - in the batch version, the response we receive is an object containing the request ID and a 'result' object, which is equivalent to what we would have received when calling directly. This means there's an extra line of code to unwrap that, but otherwise the functions are the same as in the quickstart.

Finally, we just call rpcBatch.execute() and our callbacks are fired. For an idea of the savings, in a completely unscientific test I tried the standard quickstart and found the average time to fetch the profile was around 180ms, and the friends around 500ms. With the batching operation, the total was around 500ms - the friends call completely covered the time of the profile retrieval.

Where this type of thing really helps is in situations like on mobile, where establishing a TCP connection can really be painful. You can read more about the RPCBatch options in the Google APIs Javascript Client Library documentation.


Retrieving The Signing Key Fingerprint on Android

This post is a bit of an aide-mémoire for myself. If you ever need to see which key signed an APK (for example to compare to a client ID in the API console when implementing Google+ Sign-In) you can actually extract the cert from the APK, and test it.

First you need to unzip the APK:

    unzip ~/my-app.apk

You're going to see a bunch of files extracted, including a CERT.RSA, which is usually in META-INF. If you use an alias for your key, it'll be THAT-ALIAS.RSA.

    inflating: META-INF/MANIFEST.MF
    inflating: META-INF/CERT.SF
    inflating: META-INF/CERT.RSA

You can then output the signatures for the certificate with the keytool app:

    keytool -printcert -file META-INF/CERT.RSA

This will print out the various fingerprints, and let you know the details of the certificate's owner - handy for checking whether it was accidentally signed with a debug key (which will look something like this):

    Owner: CN=Android Debug, O=Android, C=US
    Issuer: CN=Android Debug, O=Android, C=US
    Serial number: 4f963ac8
    Valid from: Wed Apr 27 12:43:33 BST 2012 until: Fri Apr 20 12:43:33 BST 2042
    Certificate fingerprints:
        MD5: 84:9E:5D:C5:2C:F5:1A:D5:29:B5:D1:28:DF:1A:6D:86
        SHA1: 12:65:36:81:D2:8C:B3:7D:9E:48:55:66:DF:DD:1B:3D:6B:EC:E8:E9
        Signature algorithm name: SHA1withRSA
        Version: 3