The future of search
It started with a Digg comment I made about a Bill Gates quote that "search is pathetic."
Search is at an interesting crossroads right now - there are two competing schools of thought about the future of search. In one corner is Google, which believes that computer algorithms are the most efficient way to index the world's information. In the other corner is Yahoo!, which believes that humans are best at organizing the world's information. (Yahoo! recently purchased Flickr, a social networking site for photos, and Delicious, a social bookmarking site.) The rise of social sites like Digg, Metafilter, and Memeorandum shows that humans are best at pushing the most interesting (not necessarily the most relevant) information to the top. This raises the question: why don't we just hire a bunch of full-time editors to organize the world's information?
This was attempted through DMOZ (amusingly, Yahoo!'s original concept was a human-edited directory of the web!), whose data Google used for its own directory. For a while, DMOZ entries were given greater weight in the Google index, but DMOZ's weight in Google's rankings was cut a long time ago.
The problem is that a few people with that much control over search results (which is a lucrative business) are bound to become biased (if you were getting paid $100 to push up a result you knew was bad, could you resist?).
So the new idea is that if the masses can push up a certain result, then it's probably the most relevant result, and there is less chance of bias.
The problem, as it stands, is that the data set is far too small. Humans are good at pushing relevant items to the top, but categorizing this information is still very hard. If I wanted the best site demonstrating video photography effects, somebody would already have to have found, tagged, and pushed up that link.
In a sense, it's the same short/long tail problem that's plagued the web since the beginning. Automated systems are very good at surfacing the long tail, but the short tail is probably better left for humans to moderate (otherwise a search engine should probably always check Wikipedia first to see if an article exists, then push up the rest).
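To make that heuristic concrete, here's a minimal sketch in Python. The wikipedia_lookup() and algorithmic_search() functions are hypothetical stubs I've made up to stand in for a human-curated source and a crawler-based index - neither is a real API:

def wikipedia_lookup(query):
    # Hypothetical stand-in: pretend we have a tiny human-curated index.
    curated = {"python": "https://en.wikipedia.org/wiki/Python_(programming_language)"}
    return curated.get(query.lower())

def algorithmic_search(query):
    # Hypothetical stand-in for results ranked by a machine algorithm.
    return [f"https://example.com/search?q={query}&rank={i}" for i in range(1, 4)]

def search(query):
    results = []
    hit = wikipedia_lookup(query)  # short tail: check the curated source first
    if hit:
        results.append(hit)
    results.extend(algorithmic_search(query))  # long tail: algorithmic fallback
    return results

print(search("python"))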
I'm not sure which way is right - I think a combination of both could be quite powerful: take into account how users like my friends have tagged a certain link, then weight that alongside a computer algorithm's score. Google's latest Co-op program is an example of this - they are trying to factor human expertise into the results generated by an algorithm.
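As a back-of-the-envelope sketch of that blending, here's what a combined score might look like in Python. The weights, the tag counts, and the normalization are all invented for illustration - this isn't how Google Co-op or any real engine actually works:

ALGO_WEIGHT = 0.7    # assumed weight for the machine-generated score
SOCIAL_WEIGHT = 0.3  # assumed weight for how often friends tagged the link

def blended_score(algo_score, friend_tag_count, max_tags=10):
    """Combine a normalized algorithmic score (0..1) with a friend-tag signal."""
    social_score = min(friend_tag_count, max_tags) / max_tags
    return ALGO_WEIGHT * algo_score + SOCIAL_WEIGHT * social_score

# Example: a link the algorithm rates 0.6 that four friends have tagged
print(blended_score(0.6, friend_tag_count=4))  # 0.54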
The future of search is going to involve a human element - and that will yield more personalized, better results. This is why companies like Yahoo! and MSN still have a fighting chance - as long as they support open standards like OPML (I guarantee you that nobody wants to be locked into just ONE bookmarking service), they can simply start tweaking their existing algorithms with that data. The big question is: will they?
Darren McLaughlin (guest)
I have to tell you: I tried the Google Co-Op thing and it left me cold. I can't imagine many people using that thing.