A pending Google patent published this past week describes how the locations of entities included in queries might be identified from information found in the search engine’s query logs, based upon click histories and other information. Query log information may also be used to associate locations with websites and web pages.
Are the Empire State Building and the Golden Gate Bridge places, or are they things? A search for just [washington monument] or [eiffel tower] doesn’t actually specify a physical address. Search for the [Statue of Liberty] and chances are that you want the one in New York Harbor, but if your search was conducted in Paris, France, you might have wanted to see one of the ones in Paris (yes, there’s more than one). There are a number of replicas of the statue worldwide.
A search for [concord point lighthouse hotels] returns a number of pages that successfully point out that the lighthouse is in Havre de Grace, Maryland, even though my query doesn’t mention the actual location. Is the search engine just finding the most relevant results for those keywords, or is it identifying the location of the lighthouse, and then trying to find websites that are the best match for both the query terms and the location?
When you perform some searches, Google might include Maps results within the web search results for those queries, or it might include some local results that change when you change your location in Google. Those queries are ones that don’t include geographic information within them, yet Google somehow decides that there’s some geographic relevance to the terms being searched for.
Some query terms likely have no geographic relevance to them, such as [linux], whose meaning is unrelated to any specific location. Other queries may evidence an intent to find something near the searcher, such as [restaurant]. A patent granted to Google this past week describes an approach that Google may be using to assign an implicit local relevance to a query term or phrase when that query doesn’t contain any explicit references to a location.
A friend asked a few months ago why Google might decide that a particular phrase has geographic relevance in his region, but not show localized or Google Maps search results in other locations. My answer was that Google had likely developed a statistical geographical model that triggers localized results based upon a combination of the query used and the location of the person searching. I’ve written a few posts in the past about a Yahoo! paper on geographic intentions, as well as a Yahoo! patent covering similar territory.
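As a rough illustration of what such a statistical model might look at, here’s a minimal Python sketch that scores a query’s implicit local intent from query-log clicks. To be clear, the function names, the data shape (pairs of searcher region and clicked-result region), and the threshold are all my own hypothetical inventions for illustration, not anything taken from the patent:

```python
def local_intent_score(click_log):
    """Fraction of logged clicks where the searcher chose a result
    located in their own region. click_log is a list of
    (searcher_region, clicked_result_region) pairs.

    A high fraction hints that the query carries implicit local
    intent (think [restaurant]); a low one suggests it does not
    (think [linux])."""
    if not click_log:
        return 0.0
    matches = sum(1 for searcher, result in click_log if searcher == result)
    return matches / len(click_log)


def has_local_intent(click_log, threshold=0.6):
    # The 0.6 cutoff is an arbitrary illustrative value.
    return local_intent_score(click_log) >= threshold
```

A search engine with access to real query logs would presumably use far richer signals than region matching, but the basic idea — let aggregate click behavior reveal whether a query is location-sensitive — is the same.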
Search engines are hard at work transforming the Web from a place of words to a place of people, places, and things. An Ars Technica article from earlier this month, How Google and Microsoft taught search to “understand” the Web, discusses this evolution of the web, though I think they see this trend incorrectly as one that only goes back a few years.
The first post I wrote about search engines extracting entities from webpages was in January of 2006, in Providing related links to documents. I’ve written a number of other posts describing how the identification and extraction of an entity from a page might be useful in one manner or another to a search engine. This is true of local search, as well as of practices that can drastically impact the composition of the search results we see every day. Over at the SEOmoz blog a couple of days ago, Dr. Pete Myers wrote The Bigfoot Update (AKA Dr. Pete Goes Crazy).
Google employs human evaluators to judge the relevance of web pages in search results, but according to Google’s Matt Cutts, usually only when engineers from the search engine are testing a new algorithm and want to compare the results with those of the ranking algorithms that they might be replacing. (We’ve also seen that Google likely uses human evaluators to uncover web spam.) Matt Cutts answered a question on how Google uses human evaluators in a video filmed last month:
Google was granted a patent today, originally filed in July of 2005, that describes how human evaluators might be used to test algorithms, as well as in actual live ranking systems for local search and for web search. Those evaluations of search results pages for specific queries could be used in a statistical model that might influence search results. Google may only be using human evaluators for purposes of testing search results (and finding web spam), but it’s interesting to see both the testing and ranking approaches described within a patent from Google.
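One simple way evaluator judgments could feed into a statistical ranking model is as a blended score, mixing an algorithmic relevance score with the average of human ratings for a query/result pair. This is purely a sketch under my own assumptions — the patent doesn’t spell out a formula, and the function name, rating scale, and blending weight here are hypothetical:

```python
def blended_score(algo_score, ratings, weight=0.3):
    """Blend an algorithmic relevance score with the mean of human
    evaluator ratings for the same query/result pair.

    algo_score and each rating are assumed to lie on a 0-1 scale;
    `weight` (hypothetical) controls how much the human signal
    counts. With no ratings, the algorithmic score stands alone."""
    if not ratings:
        return algo_score
    human_mean = sum(ratings) / len(ratings)
    return (1 - weight) * algo_score + weight * human_mean
```

In a testing-only setup, the same evaluator ratings would instead be compared against the algorithm’s output rather than blended into it, which matches Matt Cutts’ description of how the evaluations are usually used.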
A Google spokesman said in a statement that the company is always looking for better ways to help users share content and connect across the web, as in daily life. “With the Meebo team’s expertise in social publisher tools, we believe they will be a great fit with the Google+ team,” the company said. “We look forward to closing the transaction and working with the Meebo team to create more ways for users to engage online.”
Meebo started off life as an IM chat program that featured interoperability with a host of other instant messaging programs. I remember using it years ago in place of the Yahoo chat program, which used to cause my computer to crash. In December of 2008, Meebo introduced the Meebo Bar, which enabled webmasters to add chat to their websites so that visitors could interact with each other. The Meebo Bar also provided social sharing tools and advertising, including games from advertisers.
Will Google offer a life story styled timeline similar to the Facebook Timeline? It’s possible.
Google acquired three pending patents and a granted patent that were originally assigned to WisdomArk, Inc., then transferred to Lifescape LLC, and then to Timecove Corporation. The patent assignment to Google was executed on May 12th, and recorded on June 1, 2012. The organization appears to have started a couple of websites, including Our Story and MyTimeCove.
Here’s a preview of Our Story from the front page of the website:
Google Glasses have the potential to turn the growing range of visual queries possible under Google Goggles into an important aspect of the future of search and SEO. They also may make advertising using location-based services much more effective. Are you planning ahead?
Over the last three weeks, we’ve been seeing a stream of patents granted to Google involving their heads-up display device, Project Glass. These include design patents, and utility patents that hint at things like a touchscreen on the side of the glasses, sonar sensors built into them, and a visual display of the sounds around the wearer, including their direction and intensity. I wrote about the first two batches of patents in Google Glasses Design Patents and Other Wearables and More Google Glasses Patents: Beyond the Design. Google was granted another related patent this past week, titled Methods and devices for augmenting a field of view, which “augments” the field of view of human beings by helping things that might be of interest stand out, even if they are beyond a person’s normal viewing distance or outside a 180-degree peripheral field of view.
“All mushrooms are edible; but some only once.” ~ Croatian proverb
Google was granted a patent today that could be used to collect a seed set of data about features associated with different types of mushrooms, to “determine whether a specimen is poisonous based on predetermined features of the specimen.” The patent also describes how that process could be used to help filter email spam based upon the features found within an email, to determine whether images on a page are advertisements, or to determine categories of pages on the Web on the basis of textual features within those pages. The image below, from the patent, shows how features of a picture such as height, width, placement on a page, caption, and so on might be examined while determining whether or not it is an advertisement:
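To make the idea of feature-based classification concrete, here’s a toy Python sketch that scores whether an image on a page looks like an advertisement from a handful of features like those the patent figure shows. The specific rules, the feature names, and the standard ad dimensions used are my own illustrative assumptions — a real system would learn weights from a seed set of labeled examples rather than hard-code them:

```python
# Common display-ad dimensions (width, height) in pixels -- used here
# as an illustrative hand-picked feature, not anything from the patent.
STANDARD_AD_SIZES = {(728, 90), (300, 250), (160, 600)}


def is_likely_ad(image):
    """Classify an image as a probable advertisement from simple
    features. `image` is a dict with keys: width, height,
    position ('sidebar' or 'inline'), and has_caption (bool)."""
    score = 0
    if (image["width"], image["height"]) in STANDARD_AD_SIZES:
        score += 2  # standard ad dimensions are a strong signal
    if image.get("position") == "sidebar":
        score += 1  # ads often sit in sidebars
    if not image.get("has_caption", False):
        score += 1  # editorial images tend to carry captions
    return score >= 2  # arbitrary illustrative threshold
```

The same pattern — extract features, score them against examples whose labels are known, threshold the result — carries over to the patent’s other uses, from poisonous mushrooms to email spam to page categorization.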