Move over Google Author Rank, Make way for Google Authoritative Rank

Dr. Seuss

Ted Geisel, who wrote under the name Dr. Seuss. Authoritative for Green Eggs and Ham?

An authoritative user is a user of one or more computer-implemented services (e.g., a social networking service) that has been determined to be authoritative (e.g., an expert) on one or more topics that can be associated with one or more queries

This quote comes from a patent that was granted on Tuesday at the USPTO titled, Showing prominent users for information retrieval requests

I read the patent Tuesday, and thought to revisit it after reading a post this morning by Mark Traphagen at Moz, titled Will Google Bring Back Google Authorship? It's a good question, and Mark brings up a fair amount of evidence to support the idea that Google might bring back the concept of author authority in search results, even if they don't bring back or rely upon authorship markup (adding a rel="author" to a link to your Google+ profile from a page you write, or linking from your Google+ profile to pages you contribute to). As Mark notes:

Continue reading “Move over Google Author Rank, Make way for Google Authoritative Rank”

How Google May Use Schema Vocabulary to Reduce Duplicate Content in Search Results (Updated)

Updated: I was checking up on this patent this week and noticed this statement about it: 2019-02-26 Application status is Abandoned. No explanation was attached to that decision, but the process described seemed challenging to implement. It looks like the approach I wrote about in this post isn't something that Google decided to pursue a patent on. Even so, I think there are still many good reasons to include product schema markup on the pages of an ecommerce site; my guess is that it isn't necessarily an innovative approach, but it is a practical and worthwhile one anyway.
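As an illustration of the kind of product schema markup recommended above, here is a minimal JSON-LD sketch using the schema.org Product and Offer types. The product name, SKU, and price are hypothetical placeholders, not values from any site discussed in this post:

```html
<!-- Hypothetical example of schema.org Product markup on an ecommerce page -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "sku": "EX-1234",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

Markup like this gives a search engine an unambiguous, machine-readable statement of what product a page represents, which is one way duplicate or near-duplicate product pages could be recognized as describing the same item.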

One of the challenges of optimizing an e-commerce site with many filtering and sorting options is creating a click path through the site so that every page you want indexed by a search engine gets crawled and indexed. This can require blocking some URLs from being crawled via the site's robots.txt file, using parameter handling, and setting a meta robots element to noindex on certain pages.
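A sketch of the robots.txt and meta robots directives described above. The parameter names (`sort`, `filter`) are hypothetical examples of the kind of sorting and filtering parameters an e-commerce site might use:

```text
# robots.txt: block crawling of hypothetical sort/filter URL variations
User-agent: *
Disallow: /*?sort=
Disallow: /*?filter=
```

For pages that can be crawled but shouldn't appear in search results, a `<meta name="robots" content="noindex">` element in the page's head serves the same filtering role at the indexing stage rather than the crawling stage.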

If that kind of care isn't taken on a site, many more URLs might be crawled and indexed than there should be. I worked on one e-commerce site that offered around 3,000 product and category pages, yet had around 40,000 pages indexed in Google, including URL versions with both HTTP and HTTPS protocols, www and non-www subdomains, and many URLs containing sorting and filtering parameters. After I reduced the site's indexable URLs to a number closer to the number of products it offered, those pages ended up ranking better in search results.

Continue reading “How Google May Use Schema Vocabulary to Reduce Duplicate Content in Search Results (Updated)”

The Evolution of Search

I just returned from a few days in Las Vegas and the Pubcon Conference.

I had the chance to see some great presentations and talk to a number of interesting folks, and Go Fish Digital, the company where I am Director of Search Marketing, won a US Search Award for Best Use of Search for Travel/Leisure, for a campaign we did for Reston Limo.

I wanted to share my presentation from the conference here as well.

Continue reading “The Evolution of Search”