Under this new patent, Google adds a diversified set of trusted pages to act as seed sites. When calculating rankings for pages, Google would calculate a distance from the seed pages to the pages being ranked. The use of a trusted set of seed sites may sound a little like the TrustRank approach developed by Stanford and Yahoo a few years ago, as described in Combating Web Spam with TrustRank (pdf). I don’t know what role, if any, the Yahoo paper had in the development of the approach in this patent application, but there seem to be some similarities.
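The patent doesn’t spell out its implementation, but as a rough illustration of the general idea, here is a minimal Python sketch (the link graph and seed set are hypothetical) that scores pages by their shortest link distance from a set of trusted seeds, so that pages fewer hops from the seed set could be treated as more trustworthy:

```python
from collections import deque

def distance_from_seeds(link_graph, seed_pages):
    """Breadth-first search outward from a set of trusted seed pages.

    link_graph: dict mapping a page URL to the URLs it links to.
    seed_pages: iterable of trusted seed URLs (hypothetical examples).
    Returns a dict of page -> shortest link distance from any seed.
    """
    distances = {seed: 0 for seed in seed_pages}
    queue = deque(seed_pages)
    while queue:
        page = queue.popleft()
        for linked_page in link_graph.get(page, []):
            if linked_page not in distances:
                distances[linked_page] = distances[page] + 1
                queue.append(linked_page)
    return distances

# Hypothetical link graph and seed set, for illustration only
graph = {
    "seed.example.com": ["a.example.com", "b.example.com"],
    "a.example.com": ["c.example.com"],
    "b.example.com": ["c.example.com", "d.example.com"],
}
print(distance_from_seeds(graph, ["seed.example.com"]))
# {'seed.example.com': 0, 'a.example.com': 1, 'b.example.com': 1,
#  'c.example.com': 2, 'd.example.com': 2}
```

Treating every link as a unit hop is a simplifying assumption on my part; a distance calculated over a real web-link graph could just as easily weight links differently rather than counting hops.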
An authoritative user is a user of one or more computer-implemented services (e.g., a social networking service) that has been determined to be authoritative (e.g., an expert) on one or more topics that can be associated with one or more queries.
I read the patent Tuesday and thought to revisit it after reading a post this morning by Mark Traphagen at Moz, titled Will Google Bring Back Google Authorship? It’s a good question, and Mark brings up a fair amount of evidence to support the idea that they might bring back the concept of author authority in search results, even if they don’t bring back or rely upon authorship markup (adding a rel=”author” attribute to a link to your Google+ profile from a page you’ve written, or linking from your Google+ profile to pages you contribute to). As Mark notes:
One of the challenges of optimizing an e-commerce site with lots of filtering and sorting options is creating a click path through the site so that all of the pages you want indexed by a search engine get crawled and indexed. This can require setting up the site so that some URLs are blocked from crawling through the site’s robots.txt file, some are managed with parameter handling, and some pages carry meta robots elements set to noindex.
If that kind of care isn’t taken on a site, many more URLs might be crawled and indexed than there should be. I worked on one e-commerce site that offered around 3,000 products and category pages, yet had around 40,000 pages indexed in Google, including versions of URLs with both HTTP and HTTPS protocols, www and non-www subdomains, and many URLs containing sorting and filtering parameters. After I reduced the site to a number of URLs closer to the number of products it offered, those pages ended up ranking better in search results.
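As a rough illustration of the cleanup involved, here is a minimal Python sketch that collapses the kinds of URL variants described above (protocol, www subdomain, trailing slashes, and sorting/filtering parameters) to estimate how many distinct pages a set of indexed URLs actually represents. The parameter names are hypothetical; a real site would list the parameters its own faceted navigation uses.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical sorting/filtering parameters to strip, for illustration
IGNORED_PARAMS = {"sort", "order", "filter", "color", "sessionid"}

def normalize_url(url):
    """Collapse protocol, www, and filter/sort variants into one canonical URL."""
    parts = urlparse(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[4:]                      # fold www/non-www together
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k not in IGNORED_PARAMS]     # drop sorting/filtering parameters
    return urlunparse(("https", host, parts.path.rstrip("/"),
                       "", urlencode(query), ""))

urls = [
    "http://www.example.com/widgets?sort=price",
    "https://example.com/widgets/",
    "https://www.example.com/widgets?filter=red&sort=name",
]
print({normalize_url(u) for u in urls})
# All three collapse to: {'https://example.com/widgets'}
```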
I just returned from a few days in Las Vegas and the Pubcon Conference.
I had the chance to see some great presentations and talk to a number of interesting folks, and the company where I am Director of Search Marketing, Go Fish Digital, won a US Search Award for Best Use of Search for Travel/Leisure for a campaign we did for Reston Limo.
I wanted to share my presentation from the conference here as well.
A few years ago, I wrote a post about Google’s OneBox Patent Application. I was brought back to it by a new Google patent that looks at answering questions within similar answer boxes and showing rich content, like in the example below:
A patent filed by Google a couple of years ago and granted today takes another look at Oneboxes, and includes this statement early on:
A search engine provider, Google Inc. of Mountain View, Calif., has developed an “answer box” technology, known as OneBox, that has been available for several years. Using this technology, a set of web search features are offered that provide a quick and easy way for a search engine to provide users with information that is relevant to, or that answers, their search query. For example, a search engine may respond to a search query regarding everyday essential information, reference tools, trip planning information, or other information by returning, as the first search result, information responsive to the search query, instead of providing a link and a snippet for each of a number of relevant web pages that may contain information.
A Google patent granted this week targets map spammers, who submit misleading information about businesses to Google Maps using an approach referred to as keyword stuffing.
The patent attempts to identify words submitted by business owners in business titles, calculating a surprisingness value for combinations of words within a title to determine whether a business listing is legitimate or fraudulent.
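As a rough sketch of how a surprisingness value might work (this is my own illustration using a pointwise-mutual-information style score, not necessarily the patent's formula), the score below is high when two words in a title co-occur far less often than their individual frequencies would predict. All counts here are made up:

```python
import math
from itertools import combinations

def surprisingness(pair_count, count_a, count_b, total_titles):
    """High when two words co-occur far less often than their
    individual frequencies in the corpus would suggest."""
    p_a = count_a / total_titles
    p_b = count_b / total_titles
    p_ab = max(pair_count, 1) / total_titles   # smooth zero co-occurrence
    return -math.log(p_ab / (p_a * p_b))

def title_surprisingness(title, word_counts, pair_counts, total_titles):
    """Score a business title by its most surprising word pair."""
    words = set(title.lower().split())
    scores = [
        surprisingness(pair_counts.get(frozenset(pair), 0),
                       word_counts.get(pair[0], 1),
                       word_counts.get(pair[1], 1),
                       total_titles)
        for pair in combinations(words, 2)
    ]
    return max(scores, default=0.0)

# Made-up counts from a hypothetical corpus of 100,000 business titles
word_counts = {"joe's": 500, "plumbing": 2000, "cheap": 800, "viagra": 300}
pair_counts = {frozenset(("joe's", "plumbing")): 150}

legit = title_surprisingness("Joe's Plumbing", word_counts, pair_counts, 100_000)
spam = title_surprisingness("Cheap Viagra Plumbing", word_counts, pair_counts, 100_000)
print(f"legit: {legit:.2f}  spam: {spam:.2f}")
# Word pairs in the spam title never co-occur in the corpus, so it scores higher.
```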
Traditionally, the ranking signals Google Maps uses to include business listings in search results depend upon a business’s distance from the searcher, how prominent the business might be on the web, and how relevant the business’s title might be to the query the searcher used to find it.
When someone searches for a business, Google Maps may show prominent businesses based on the searcher’s location. This patent targets people who might take advantage of that behavior by faking information in business listings to attract searchers to unrelated websites, trading on well-known businesses located in a specific area:
Last year I wrote a post titled Google on Finding Entities: A Tale of Two Michael Jacksons. The post was about a Google patent that described how Google might tell apart different entities that share the same name. The patent described there was filed in 2012 and granted in 2014. Google was also granted a new patent on disambiguating entities this week, one that was originally filed in 2006. It is worth looking at this second one, given how important understanding entities is to Google.
It contains a pretty thoughtful approach to understanding and distinguishing between different entities within documents and queries.
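As a loose illustration of what disambiguation can look like (a deliberately simplified sketch, not the patent's actual method), the Python below picks between two same-named entities by overlapping a query's terms with context terms associated with each entity. The entities and term lists are hypothetical:

```python
# Hypothetical context terms associated with two entities sharing a name
ENTITY_CONTEXTS = {
    "Michael Jackson (musician)": {"thriller", "singer", "pop", "album", "motown"},
    "Michael Jackson (beer writer)": {"beer", "ale", "brewery", "whisky", "writer"},
}

def disambiguate(query, entity_contexts):
    """Return the entity whose context terms best overlap the query's terms."""
    query_terms = set(query.lower().split())
    def overlap(item):
        return len(query_terms & item[1])
    return max(entity_contexts.items(), key=overlap)[0]

print(disambiguate("michael jackson thriller album", ENTITY_CONTEXTS))
# Michael Jackson (musician)
print(disambiguate("michael jackson beer hunter", ENTITY_CONTEXTS))
# Michael Jackson (beer writer)
```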