Collaborative filtering is a way of presenting more relevant information to users by choosing what to show based on how other users have acted. The “You might also like” boxes on Amazon and similar sites are probably the best-known example of this kind of technology, but it’s applicable to recommending all kinds of content outside of ecommerce, particularly anything that often has user ratings, such as video or games.
What the system needs is some way of allowing users to express preferences for certain items. This could be making a purchase, clicking a link, or rating the item, for example. We can score the similarity between pairs of items using the Pearson correlation coefficient, which measures how reliably users’ scores change together. If the value for X increases as Y increases (so a user who rates X highly also rates Y highly, even if they don’t increase at the same rate), the score will be high, towards 1. If one consistently decreases as the other increases the score will be negative, towards -1, and if there’s no relationship it will be zero.
This function generates the score for two lists, assumed to be in the form array(identifier => score). It will return a score between -1 and 1.
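The function itself isn’t reproduced here, but a minimal sketch of the same idea in Python, using dicts of `{identifier: score}` in place of the PHP-style `array(identifier => score)`, might look like this:

```python
from math import sqrt

def pearson(ratings_a, ratings_b):
    """Pearson correlation over the identifiers both dicts share.
    Inputs are {identifier: score} dicts; returns a value in [-1, 1],
    or 0.0 when the correlation is undefined."""
    shared = [k for k in ratings_a if k in ratings_b]
    n = len(shared)
    if n == 0:
        return 0.0
    xs = [ratings_a[k] for k in shared]
    ys = [ratings_b[k] for k in shared]
    # Standard computational form of the Pearson coefficient.
    num = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
    den = sqrt((sum(x * x for x in xs) - sum(xs) ** 2 / n) *
               (sum(y * y for y in ys) - sum(ys) ** 2 / n))
    return num / den if den else 0.0
```

A pair of dicts with perfectly linear scores returns 1.0, perfectly opposed scores return -1.0, and dicts with no shared identifiers fall back to 0.0.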
We can use this by getting a list of all reviews and items and comparing every pair. This is an expensive operation, but it doesn’t need to be done all that often; it’s certainly something that could be batched up overnight, for example. What we will end up with is a matrix of scores for many pairs of products, which in a production system would be stored in a database. In our example data we have a range of scores for various items from different users, ranging from 1 to 5.
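The batch step could be sketched as follows, assuming review data keyed as item to `{user: rating}`; the `reviews` values here are made up for illustration:

```python
from itertools import combinations
from math import sqrt

def pearson(a, b):
    # Correlation over the keys both dicts share; 0.0 if undefined.
    shared = [k for k in a if k in b]
    n = len(shared)
    if n == 0:
        return 0.0
    xs = [a[k] for k in shared]
    ys = [b[k] for k in shared]
    num = sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys) / n
    den = sqrt((sum(x * x for x in xs) - sum(xs) ** 2 / n) *
               (sum(y * y for y in ys) - sum(ys) ** 2 / n))
    return num / den if den else 0.0

# Hypothetical review data: item -> {user: rating from 1 to 5}.
reviews = {
    'A': {'alice': 5, 'bob': 3, 'carol': 4},
    'B': {'alice': 1, 'bob': 2, 'carol': 2},
    'C': {'alice': 4, 'bob': 3, 'carol': 5},
}

# Score every pair of items once; store both directions for easy lookup.
similarity = {item: {} for item in reviews}
for x, y in combinations(reviews, 2):
    score = pearson(reviews[x], reviews[y])
    similarity[x][y] = score
    similarity[y][x] = score
```

In a real system the `similarity` matrix would be written to a database rather than held in memory, and the pairwise loop would run as the overnight batch job.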
We can then make recommendations based on what a user is looking at. If we don’t know anything about the user, we can simply recommend items similar to the one they’re viewing by taking the top scores out of the array. This can be used to suggest items on the product page itself, for example, or to suggest other videos users might like to watch after watching one.
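Taking the top scores is just a sort over the precomputed row for that item; the `similarity` values below are hypothetical:

```python
# Hypothetical precomputed similarity rows: item -> {other_item: score}.
similarity = {
    'A': {'B': -0.87, 'C': 0.5, 'D': 0.9},
}

def similar_items(similarity, item, n=2):
    """Return the n items most similar to `item`, highest score first."""
    scores = similarity[item]
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(similar_items(similarity, 'A'))  # ['D', 'C']
```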
However, if the user has expressed some preferences themselves, then we can use those to try and find something else they might like. For example, if the user has rated two items, A, which she rated 5, and B, which she rated 1, we can work out what to suggest by looking at the similarities. If X is 0.4 similar to A and 0.3 similar to B, it gets a score of 0.4 * 5 + 0.3 * 1 = 2.3. We then divide by the total of the similarities to get the user’s estimated score: 2.3 / 0.7 = 3.3. If Y is 0.3 similar to A but only 0.1 similar to B, it gets a score of 0.3 * 5 + 0.1 * 1 = 1.6, divided by 0.4 to give 4.
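That weighted average can be sketched as below; `predict_rating` is a hypothetical helper name, and the figures match the worked example:

```python
def predict_rating(user_ratings, similarities):
    """Estimate a user's score for a candidate item as the average of
    their existing ratings, weighted by similarity to the candidate.
    similarities maps each rated item to its similarity to the candidate."""
    total = sum(similarities[i] * user_ratings[i] for i in similarities)
    weight = sum(similarities.values())
    return total / weight if weight else 0.0

user = {'A': 5, 'B': 1}
# X: 0.4 similar to A, 0.3 similar to B -> (0.4*5 + 0.3*1) / 0.7
print(round(predict_rating(user, {'A': 0.4, 'B': 0.3}), 1))  # 3.3
# Y: 0.3 similar to A, 0.1 similar to B -> (0.3*5 + 0.1*1) / 0.4
print(round(predict_rating(user, {'A': 0.3, 'B': 0.1}), 1))  # 4.0
```

Dividing by the total similarity keeps the estimate on the same 1-to-5 scale as the ratings, so items similar to highly rated ones bubble to the top.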
There are plenty of extensions possible once the data has been calculated, such as finding similar users for last.fm-style neighbour suggestions. The Pearson coefficient is hardly the only similarity score either; particularly for boolean preferences (‘buy’ or ‘click’, for example) a set-overlap measure like the Jaccard coefficient would do the job just as well, and doesn’t involve any square roots. At a more advanced level, clustering techniques could be used to provide even better recommendations, though at the cost of more complexity.
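For comparison, Jaccard similarity over boolean preferences is just set arithmetic; the purchase data here is made up:

```python
def jaccard(a, b):
    """Set overlap |A ∩ B| / |A ∪ B|: 0.0 for disjoint sets, 1.0 for identical."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Users who bought item X versus item Y (hypothetical data).
bought_x = {'alice', 'bob', 'carol'}
bought_y = {'bob', 'carol', 'dave'}
print(jaccard(bought_x, bought_y))  # 2 shared buyers / 4 total = 0.5
```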