Representatives from Google announced recently that they would no longer be updating their PageRank toolbar annotations for web pages. Google had been updating those 3-4 times a year for over a decade.
Does this news indicate that Google is no longer using PageRank, or that PageRank has changed in some significant way? (The ranking signal isn’t the toolbar annotation itself, which was updated too infrequently to be an accurate reflection of what PageRank might have been for a page.)
Could it be a sign that Google has found something different?
A compositional query may be aimed at providing a data point to identify another related data point as an answer or solution to that query.
For example, the following two queries are compositional queries and focus upon an answer at a fixed location or a fixed point in time:
[Starbucks near San Francisco Airport]
[Films shot during World War II]
The approach in this Google patent can involve determining a first entity type (the San Francisco Airport or World War II from our examples), a second entity type (Starbucks or Films), and a relationship between them that responds to the compositional query (Starbucks near the airport, and films shot during WWII).
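To make the idea concrete, here is a minimal sketch, not the patent's actual implementation, of how a compositional query might be decomposed into a fixed first entity, a second entity type, and a relationship, then answered from a tiny store of facts. All of the entity names, types, and relationship labels below are hypothetical illustrations.

```python
# Each entity is assigned a type; facts link one entity to another
# through a named relationship. These names are invented for the sketch.
ENTITY_TYPES = {
    "Starbucks SFO": "coffee shop",
    "Starbucks Union Square": "coffee shop",
    "Casablanca": "film",
    "Mrs. Miniver": "film",
}

FACTS = [
    ("Starbucks SFO", "near", "San Francisco Airport"),
    ("Starbucks Union Square", "near", "Union Square"),
    ("Casablanca", "shot_during", "World War II"),
    ("Mrs. Miniver", "shot_during", "World War II"),
]

def answer(second_entity_type, relationship, first_entity):
    """Return entities of the second type that stand in the given
    relationship to the fixed first entity (a place or time period)."""
    return [subject for (subject, rel, obj) in FACTS
            if rel == relationship
            and obj == first_entity
            and ENTITY_TYPES.get(subject) == second_entity_type]

# [Starbucks near San Francisco Airport]
print(answer("coffee shop", "near", "San Francisco Airport"))
# [Films shot during World War II]
print(answer("film", "shot_during", "World War II"))
```

The point of the decomposition is that the answer is another data point in the graph, reached through the relationship, rather than a page that happens to contain the query's keywords.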
We’re used to search engines matching the keywords we query, returning pages that contain those words.
But what if search engines worked differently?
It seems like search engines are starting to do that, showing direct answers to searches as a fact in an “answer box” at the top of a set of search results. And those questions are sometimes more than just “what’s the weather like in Warrenton, Virginia?”
Instead of indexing pages on the web and what those pages contain, search engines can be used to search other data sources, such as a data graph, or knowledge bases at Google through Google’s Knowledge Graph.
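As a toy contrast, purely illustrative and not how Google's systems actually work, the snippet below shows classic keyword matching over indexed pages next to a direct-answer lookup from a small fact store. The page text, fact entries, and weather value are all made up for the example.

```python
# A handful of "indexed" pages, keyed by a page id.
PAGES = {
    "page1": "weather forecasts for Warrenton Virginia and nearby towns",
    "page2": "history of railroads in Virginia",
}

def keyword_search(query):
    """Classic keyword matching: return ids of pages containing
    every term in the query."""
    terms = query.lower().split()
    return [page_id for page_id, text in PAGES.items()
            if all(term in text.lower() for term in terms)]

# A tiny fact store: (entity, attribute) -> value. The value here
# is a placeholder, not real data.
FACTS = {("Warrenton, Virginia", "weather"): "72°F and sunny"}

def direct_answer(entity, attribute):
    """Fact lookup: return a single answer instead of a list of pages."""
    return FACTS.get((entity, attribute))

print(keyword_search("weather Warrenton"))
print(direct_answer("Warrenton, Virginia", "weather"))
```

Keyword search hands back pages for the searcher to read; the fact lookup hands back the answer itself, which is what makes an “answer box” possible.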
There’s a “Greenway park” near me, built where a train had previously roamed for over 100 years. The park is narrow and not much wider than the width of railroad cars. It cuts a nice path for local residents to use to walk across town, and it’s a relaxing trail to parks and schools and to walk a dog. Which is good since that rail mode of transportation has been replaced mostly by the automobile.
Former Google search engineer Andrew Hogue, now the head of search at Foursquare, was in charge of a project at Google called the Annotation Framework project.
He put together a team in the mid-2000s who pursued patents on a range of related topics involving something called a browseable fact repository, which would later grow into Google’s Knowledge Graph.
At the start of a patent granted to Google this past September, we’re told that:
Implementations of the systems and methods for entity-based searching with content selection are described herein.
The phrase “content selection,” when it appears in Google patents, doesn’t often refer to site owners creating content on their web sites, but usually to advertisements and the landing pages people create to show as search results, or to pages on their sites associated with those ads.
This is one of the first few patents I’ve seen from Google that ties together the Semantic Web and paid search. Another was described in Barbara Starr’s article on Search Engine Land from last week, where she points to Google’s patent for Product Search and how queries, and attributes related to products within those queries, can be used by Google to help identify the appropriate pages to which searchers may be delivered.
When you optimize a site for the HTML Web and for the Semantic Web, you’re performing two different tasks that can complement each other, and both of them can be very helpful. But not if you forget whom you’re doing it for.
I had an opportunity to watch a webinar a couple of weeks ago about software that looked at the messages and words you were using on your landing pages and advertisements, and suggested semantically related terms to include in them.
During the webinar, we had the chance to ask questions, and I noticed that the word “audience” hadn’t been mentioned once.
In 2003, a paper titled Semantic Search was published by Ramanathan Guha of IBM, Rob McCool of the Knowledge Systems Lab at Stanford University, and Eric Miller of W3C and MIT. Their stated goal in the paper was to take technologies such as web services and the Semantic Web and use them to “improve traditional web searching.”
This was written before Ramanathan Guha joined Google, started Google Custom Search Engines, created Google’s version of TrustRank, and introduced Schema.org.
I’m working on putting together a history of the Semantic Web at Google, and this early look at the Semantic Web provides some insights from at least one person who played a major role in how the Semantic Web is becoming part of search at Google.