When I write about patent filings, they are usually from search engines or social networking sites, or from companies that have been acquired by someone like Google or Facebook. I ran across one patent application published this week that instead comes from a person offering search engine optimization services. I don’t think it’s possible to do SEO with just automated tools, because there are so many issues on a site that need to be considered, reviewed, and often adjusted for a site to be competitive in search results.
Some of those are technical, like handling canonical issues so that you ideally only have one URL per page. Some of those issues involve making marketing decisions, like understanding the audience of a site and the language that they might expect to see on a page about a particular topic. Some may involve things like deciding how the information architecture of a site might be set up, so that it’s easy for visitors to understand where they are on a site, where they can go, and how their situational and informational needs might be addressed on the pages of that site.
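To make the canonical-URL point concrete, here is a minimal sketch (my own illustration, not taken from any patent or tool) of normalizing common URL variants so that each page ideally resolves to one URL; the list of tracking parameters is a hypothetical example a real site would tune:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical tracking parameters to strip; a real site would tune this list.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def canonicalize(url: str) -> str:
    """Reduce common URL variants (host case, trailing slash, tracking
    parameters, parameter order) to a single canonical form."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    # Whether to strip a leading "www." is site-specific; here we only
    # lowercase the host.
    netloc = netloc.lower()
    if path.endswith("/") and path != "/":
        path = path.rstrip("/")
    params = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
              if k not in TRACKING_PARAMS]
    params.sort()  # stable parameter order, so ?a=1&b=2 matches ?b=2&a=1
    return urlunsplit((scheme.lower(), netloc, path, urlencode(params), ""))
```

For example, `canonicalize("HTTP://Example.com/Page/?utm_source=x&b=2&a=1")` collapses the host case, trailing slash, tracking parameter, and parameter order into one form.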
Bing has been showing some social annotations and enhancements to searchers for a while now; if you were logged into Facebook when searching at Bing, you might see that a friend had liked a particular result on Facebook. At the beginning of this month, Bing started delivering even more social search results in a social sidebar, which they announced in their blog post, Social Meets Search with the Latest Version of Bing, Available to Everyone in US Today.
Those search results might seem like a response to Google’s “Search Plus Your World” social search results, though the Bing social results may be more social and more interactive than Google’s. A patent application from Microsoft published last week provides some hints as to when Bing might show social results to searchers, based upon different relevance signals between a person’s current query and possibly related queries from their connections. And yes, Bing refers to social connections or friends using the term “Buddies.”
Almost seven years ago, I started thinking about what documents I would recommend that people read if they wanted to learn as much about SEO as possible. SEO by the Sea was a little more than a couple of months old, and I started a series of posts that I called the “100 best SEO documents of all time.” I started the series knowing the first 30 papers, blog posts, and patents that I wanted to include in the series, and somehow never got past those first thirty.
Those posts were the three immediately before the gathering that originally gave SEO by the Sea its name. I went from blogger to event organizer, and never quite returned to the series that I started. In the past couple of days, the first post got some attention on Twitter, and I promised to update the series.
The next ten documents are ones that I’ve been thinking about quite a bit since reading them, along with what they might mean for the future of search.
A pending Google patent published this past week describes how the locations of entities included in queries might be identified from information found in the search engine’s query logs, based upon click histories and other information. Query log information may also be used to associate locations with websites and web pages.
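As a rough illustration of the idea in the patent filing (not Google’s actual method), here is a toy sketch of inferring an entity’s dominant location from click histories in a query log; all of the log data and the threshold below are made up:

```python
from collections import Counter

# Toy query-log records: (query, location_of_clicked_result).
# The location labels are hypothetical stand-ins for whatever geocoded
# data a search engine might attach to clicked pages.
click_log = [
    ("eiffel tower", "Paris, France"),
    ("eiffel tower", "Paris, France"),
    ("eiffel tower", "Las Vegas, NV"),
    ("statue of liberty", "New York, NY"),
    ("statue of liberty", "New York, NY"),
    ("statue of liberty", "Paris, France"),
]

def dominant_location(query, log, min_share=0.5):
    """Return the location clicked most often for a query, if it accounts
    for at least min_share of that query's clicks; otherwise None."""
    counts = Counter(loc for q, loc in log if q == query)
    total = sum(counts.values())
    if not total:
        return None
    loc, n = counts.most_common(1)[0]
    return loc if n / total >= min_share else None

print(dominant_location("statue of liberty", click_log))  # New York, NY
```

A real system would presumably combine this kind of aggregate click evidence with the searcher’s own location, which is how the Paris replicas could still win for a searcher in France.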
Are the Empire State Building or the Golden Gate Bridge places, or are they things? A search for just [washington monument] or [eiffel tower] doesn’t actually specify a physical address. Search for the [Statue of Liberty] and chances are that you want the one in New York Harbor, but if your search was conducted in Paris, France, you might have wanted to see one of the ones in Paris (yes, there’s more than one). There are a number of replicas of the statue worldwide.
A search for [concord point lighthouse hotels] returns a number of pages that successfully point out that the lighthouse is in Havre de Grace, Maryland, even though my query doesn’t mention the actual location. Is the search engine just finding the most relevant results for those keywords, or is it identifying the location of the lighthouse, and then trying to find websites that are the best match for both the query term and the location?
When you perform some searches, Google might include Maps results within the web search results for those queries, or it might include some local results that change when you change your location in Google. Those queries are ones that don’t include geographic information within them, yet Google somehow decides that there’s some geographic relevance to the terms being searched for.
Some query terms likely have no geographic relevance to them, such as a query like [linux], which pretty much has a meaning unrelated to any specific location. Other queries may evidence an intent to find a location near a searcher, such as [restaurant]. A patent granted to Google this past week describes an approach that Google may be using to assign an implicit local relevance to a query term or phrase when that query doesn’t contain any explicit references to a location.
A friend asked a few months ago why Google might decide that a particular phrase might be seen to have a geographical relevance in his region, but not show localized or Google Maps search results in other locations. My answer was that Google likely had developed a statistical geographical model which would trigger localized results based upon a combination of query used and location of the person searching. I’ve written a few posts in the past about a Yahoo! paper on geographic intentions, as well as a Yahoo! patent covering similar territory.
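A statistical model like the one I described to my friend might be sketched, very roughly, like this; the log data, regions, and threshold are all hypothetical, and the real model would be far more involved:

```python
# Hypothetical log entries: (query, searcher_region, clicked_a_local_result).
# All names and numbers here are invented for illustration.
log = [
    ("restaurant", "Seattle", True),
    ("restaurant", "Seattle", True),
    ("restaurant", "Seattle", False),
    ("linux", "Seattle", False),
    ("linux", "Seattle", False),
    ("crab shack", "Baltimore", True),
    ("crab shack", "Seattle", False),
]

def local_intent(query, region, log, threshold=0.5):
    """Guess whether a query shows local intent in a region: the share of
    its searches there that ended with a click on a local or Maps result."""
    clicks = [local for q, r, local in log if q == query and r == region]
    return bool(clicks) and sum(clicks) / len(clicks) >= threshold
```

In this toy version, [restaurant] triggers localized results in Seattle, [linux] never does, and [crab shack] is geo-relevant in Baltimore but not in Seattle, which is exactly the region-by-region behavior my friend was asking about.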
Search engines are hard at work transforming the Web from a place of words to a place of people, places, and things. An Ars Technica article from earlier this month, How Google and Microsoft taught search to “understand” the Web, discusses this evolution of the web, though I think they mistakenly see this trend as one that goes back only a few years.
The first post I wrote about search engines extracting entities from webpages was in January of 2006, in Providing related links to documents. I’ve written a number of other posts that describe how the identification and extraction of an entity from a page might be useful in one manner or another to a search engine. This is true with local search, as well as with practices that can drastically impact the composition of the search results that we see every day. Over at the SEOmoz blog a couple of days ago, Dr. Pete Myers wrote The Bigfoot Update (AKA Dr. Pete Goes Crazy).
Google employs human evaluators to judge the relevance of web pages in search results, but according to Google’s Matt Cutts, usually only when engineers from the search engine are testing a new algorithm and want to compare the results with those from the ranking algorithms that they might be replacing. (We’ve also seen that Google likely uses human evaluators to uncover web spam.) Matt Cutts answered a question on how Google uses human evaluators in a video filmed last month:
Google was granted a patent today, originally filed in July of 2005, that describes how human evaluators might be used to test algorithms, as well as within actual live ranking systems for local search and for web search. Those evaluations of search results pages for specific queries could be used in a statistical model that might influence search results. Google may only be using human evaluators for purposes of testing search results (and finding web spam), but it’s interesting to see both the testing and ranking approaches described within a patent from Google.
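One way such evaluator ratings could feed a statistical model, sketched loosely here and not taken from the patent itself, is to blend the mean human rating into an algorithmic relevance score; the ratings, sites, and weight below are all invented for illustration:

```python
# Hypothetical evaluator ratings on a 1-5 scale for (query, url) pairs.
ratings = {
    ("best pizza", "pizzaguide.example"): [5, 4, 5],
    ("best pizza", "spammy.example"): [1, 2, 1],
}

def blended_score(query, url, algo_score, weight=0.3):
    """Blend an algorithmic relevance score (0-1) with the mean human
    rating, rescaled from 1-5 onto 0-1. With no ratings on file for the
    pair, fall back to the algorithmic score alone."""
    votes = ratings.get((query, url))
    if not votes:
        return algo_score
    human = (sum(votes) / len(votes) - 1) / 4  # map 1-5 onto 0-1
    return (1 - weight) * algo_score + weight * human
```

Well-rated pages get a boost, poorly rated ones are demoted, and unrated pages rank purely on the algorithm, which mirrors how a small pool of evaluations could influence live results without covering every query.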
A Google spokesman said in a statement that the company is always looking for better ways to help users share content and connect across the web, as in daily life. “With the Meebo team’s expertise in social publisher tools, we believe they will be a great fit with the Google+ team,” the company said. “We look forward to closing the transaction and working with the Meebo team to create more ways for users to engage online.”
Meebo started life as an IM chat program that featured interoperability with a host of other instant messaging programs. I remember using it years ago in place of the Yahoo chat program, which used to cause my computer to crash. In December of 2008, Meebo introduced the Meebo Bar, which enabled webmasters to add chat to their websites so that visitors could interact with each other. The Meebo Bar also provided social sharing tools and advertising, including games from advertisers.