Are You Trusted by Google?

Are you a robot? A spammer? A sock puppet? A trusted author and content developer? A trusted agent in the eyes of Google? (More on trusted agents below.)

When you interact on a social network, write a review online, or update information on an internet mapping service, how much does the service you are using trust the content that you add, or the changes that you might make?

These aren’t rhetorical questions, but rather ones at the heart of the approaches of services like Google Web search and Google Maps, which are relying more and more upon social signals and social collaboration to provide the information that they present to the public.

If you’ve seen a +1 button within Google’s search results or on a site and clicked upon it, or shared a page or post or site in Google Plus with others, you’ve endorsed the work of the author who created that page, post, or site. How much weight does Google give that endorsement?

If you find an error on a Google Place page, such as an incorrect phone number or bad street address, and you take the time to try to correct that, what process might Google go through to decide if you’re telling the truth?


How Automated Evaluations Might Help Decide Upon Rankings for Search Results at Google

A number of years back, a few friends and I had gathered at one friend’s house for Thanksgiving, and I remember being humbled by a homework assignment his son proudly showed off to his father: a crayon drawing listing what the boy was thankful for, which included his parents, his sister, and his shoes. We were all knocked somewhat silent by the picture. We take so much for granted that we should be thankful to have. Thank you to everyone who stops by here to read, to learn, to share, and to add to the discussion. Thank you, too, for the chance to share the things I find and the things that I learn from you all.

On Monday, I wrote about a recently granted patent from Google that described How Human Evaluators Might Help Decide Upon Rankings for Search Results at Google. Interestingly, this week Google was granted a patent that describes an automated method they might use to check the quality of specific sets of search results.

When Google responds to a searcher’s query, it presents a list of pages and other kinds of documents, such as images, news results, or videos. The patent was filed before Google introduced universal search, but it probably does a good job of describing something Google might do with web-page-based search results.
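The patent spells out its own evaluation method, but as a rough illustration of what an automated quality check on a set of search results could look like, here is a minimal sketch that scores two result orderings with discounted cumulative gain (DCG), a standard information retrieval metric. The relevance judgments and URLs are invented, and DCG is my stand-in for illustration, not necessarily the approach the patent describes.

```python
# A rough sketch of scoring a set of search results automatically:
# discounted cumulative gain (DCG) against a table of relevance
# judgments. DCG is a standard IR metric used here for illustration,
# not necessarily the patent's method; the judgments and URLs are
# invented.
import math

def dcg(results, judgments):
    """Sum each result's judged relevance, discounted by its rank."""
    return sum(
        judgments.get(url, 0) / math.log2(rank + 1)
        for rank, url in enumerate(results, start=1)
    )

# Hypothetical graded relevance labels (0 = irrelevant, 3 = best).
judgments = {
    "example.com/a": 3,
    "example.com/b": 2,
    "example.com/c": 1,
}

old_results = ["example.com/c", "example.com/a", "example.com/b"]
new_results = ["example.com/a", "example.com/b", "example.com/c"]

# A higher score means the more relevant pages sit nearer the top.
print(dcg(old_results, judgments))  # ~3.89
print(dcg(new_results, judgments))  # ~4.76
```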


How Human Evaluators Might Help Decide Upon Rankings for Search Results at Google

A Google patent granted last week describes how the search engine might enable people to experiment with changing the weight and value of different ranking signals for web pages, to gauge how those changes might influence the quality of search results for specific queries. The patent lists Misha Zatsman, Paul G. Haahr, Matthew D. Cutts, and Yonghui Wu amongst its inventors, and doesn’t provide much in the way of context as to how this evaluation system might be used. As written, it seems like something the search engine could potentially make available to the public at large, but I’m not sure whether they would do that.
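The patent doesn’t include code, but the core idea of experimenting with signal weights can be sketched simply: give each page a score that is a weighted sum of its signal values, and watch how changing the weights reorders the results for a query. The signal names, values, and weights below are invented for illustration.

```python
# A minimal sketch of re-ranking pages under different sets of signal
# weights. The signals, values, and weights are invented; the patent
# describes an evaluation framework, not these particular signals.

def score(signals, weights):
    """Weighted sum of a page's ranking-signal values."""
    return sum(weights[name] * value for name, value in signals.items())

pages = {
    "page-a": {"relevance": 0.9, "pagerank": 0.3},
    "page-b": {"relevance": 0.6, "pagerank": 0.8},
}

for weights in ({"relevance": 1.0, "pagerank": 0.5},
                {"relevance": 0.5, "pagerank": 1.0}):
    ranked = sorted(pages, key=lambda p: score(pages[p], weights),
                    reverse=True)
    print(weights, "->", ranked)
# Shifting weight from relevance toward PageRank flips the order of
# the two pages, which is the kind of effect an evaluator could judge.
```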

In the blog post Google Raters – Who Are They?, Potpiegirl writes about the manual reviewers Google uses to evaluate the relevance and quality of search results. She parsed through a forum where people have been discussing their experiences as reviewers for Google and collected information about how the review program works. The post contains some interesting details about the processes those evaluators follow, including a discussion of two different types of reviews they participate in. One involves being given a particular keyword and a URL, and deciding how relevant that page is for that keyword. The other involves being given two different sets of search results for the same query, and deciding which set provides the better results for that query term.
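As a rough sketch of how that second, side-by-side type of review might be tallied, here is a small example aggregating hypothetical rater preferences between two result sets for the same query; how Google actually aggregates these judgments isn’t public.

```python
# A rough sketch of tallying side-by-side evaluations, where each rater
# sees two result sets for the same query and picks the better one.
# The votes are invented; Google's real aggregation isn't public.
from collections import Counter

votes = ["A", "B", "A", "A", "tie", "B", "A"]  # hypothetical rater picks

tally = Counter(votes)
preferred = max(("A", "B"), key=lambda side: tally[side])
print(tally)                        # Counter({'A': 4, 'B': 2, 'tie': 1})
print("preferred set:", preferred)  # preferred set: A
```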

A screenshot from a Google patent describing a framework for evaluating search results generated using different scoring weights.


The Integration of Social Media into Search Results and Rankings: Internet Summit 2011

I gave a presentation on SEO and Social Media yesterday at the Internet Summit 2011 in Raleigh, NC, in an Advanced SEO session with Lindsay Wassell, Michael Marshall, and Markus Renstrom, the head of SEO at Yahoo!. Daryl Hemeon has a nice write-up of the presentations in his Advanced SEO – Internet Summit Day 2 Notes.

I included a number of links and references within the presentation that we didn’t visit or spend time on during the session, for anyone who might want to follow them for more details. The basic premise behind my presentation was that social media has changed the expectations of searchers, that the search engines have had no choice but to change in response, and that SEO is likewise evolving to meet those expectations.

Patent Filings from Google’s Acquisition of Apture and Katango: Highlighted Search, YouTube Sliders, and Intelligent Social Media Agents?

Imagine being able to highlight any text on a web page and search the Web based upon that text. Or imagine an easier way to embed videos or other content in windows that appear and open without launching a new browser window.

Now imagine that your Google Plus Circles could engage in friend relationship management, becoming better at self-organizing by grouping the people you add to your Google Plus account by whether they are co-workers, whether they live nearby, the kind of company they work for, the school they went to, or many other attributes that might make circle management smarter and a little more fun. Now imagine that the technology behind that involves intelligent social media agents that keep an eye on the social activity of your contacts.
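Katango’s actual algorithms aren’t public, but the flavor of that kind of self-organization can be sketched as bucketing contacts by shared attributes. The contacts, attribute names, and grouping rule in this sketch are all invented for illustration.

```python
# A minimal sketch of grouping contacts into candidate circles by a
# shared attribute. The contacts and attributes are invented; Katango's
# actual clustering methods are not public.
from collections import defaultdict

contacts = [
    {"name": "Ann",  "employer": "Acme",   "school": "Stanford"},
    {"name": "Bob",  "employer": "Acme",   "school": "MIT"},
    {"name": "Cara", "employer": "Globex", "school": "Stanford"},
]

def group_by(contacts, attribute):
    """Bucket contacts into candidate circles by one attribute."""
    circles = defaultdict(list)
    for person in contacts:
        circles[person[attribute]].append(person["name"])
    return dict(circles)

print(group_by(contacts, "employer"))
# {'Acme': ['Ann', 'Bob'], 'Globex': ['Cara']}
print(group_by(contacts, "school"))
# {'Stanford': ['Ann', 'Cara'], 'MIT': ['Bob']}
```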

Google revealed last Thursday that it has acquired a couple of companies, seemingly both for the expertise and knowledge of the people those companies employ and for the technology they have developed. I tracked down the patent filings assigned to each company to try to get a deeper glimpse at some of those technologies.

One of the companies Google acquired is Apture, a business started in 2007 by Tristan Harris and Can Sar, a couple of Stanford students. The Apture website notes that the Apture team will be joining Google’s Chrome team. That makes sense, since Apture specializes in making browser experiences richer by providing text boxes that pop out when you click upon links on a page. Apture supplied these kinds of features for a number of partner sites, as well as through a plugin that worked with Chrome, Firefox, and Safari.


Google Patent on Displaying Breadcrumb Links in Search Results

Once upon a time, when you searched the Web at Google, the results displayed were limited to a list of 10 pages, each shown with a page title, a snippet of text taken from the meta description or page content, and the URL of that page. We’ve been seeing the search engines diversify what they display for certain pages, with special formats for things like forum posts, Q&A listings, and pages that include events, and sometimes with sitelinks or quicklinks to other pages as well.

The URLs shown for some pages might have hinted at the structure of a site and the location of a page within its hierarchy, if they showed the directories and subdirectories in the paths to those pages. Some websites include breadcrumb navigation on their pages to show you more explicitly where you are within a site, and to provide an easy way to visit higher-level categories. Google has started showing those breadcrumb trails in place of URLs for some pages, to make those listings more useful for searchers and to make it clearer where those pages sit within the hierarchy of a site.
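To make the connection between URL paths and breadcrumb trails concrete, here is a small sketch that derives a trail from a hypothetical URL. Real breadcrumb navigation usually reflects a site’s own category structure rather than its URL paths, so this is only an illustration of the hierarchy a URL can hint at.

```python
# A small sketch deriving a breadcrumb trail from a URL's path. The URL
# is hypothetical, and real breadcrumbs usually come from a site's own
# navigation rather than from its URL structure.
from urllib.parse import urlsplit

def breadcrumbs(url):
    """Return (label, link) pairs for each level of the URL's path."""
    parts = urlsplit(url)
    trail, path = [], ""
    for segment in parts.path.strip("/").split("/"):
        path += "/" + segment
        label = segment.replace("-", " ").title()
        trail.append((label, f"{parts.scheme}://{parts.netloc}{path}"))
    return trail

for label, link in breadcrumbs("https://example.com/books/fiction/mystery"):
    print(label, "->", link)
# Books -> https://example.com/books
# Fiction -> https://example.com/books/fiction
# Mystery -> https://example.com/books/fiction/mystery
```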

An example from the patent of the search engine showing breadcrumb links in a search result instead of a URL for the page listed.


Google Granted Patent on Hostname Mirrors

For years, the New York Times website was a great example I could point people to of a very high-profile site getting one of the basics of SEO very, very wrong.

If you visited the site at “http://newyorktimes.com/” you would see a toolbar PageRank of 7 for its homepage. If instead you visited the site at “http://www.newyorktimes.com/” you would see a toolbar PageRank of 9. The New York Times pages resolved at both sets of URLs, with and without the “www” hostname. Because all indexed pages of the site were accessible both with and without the “www”, those pages weren’t getting all the PageRank that they should have been; it was split between the two versions of the site, which probably cost the Times in rankings at Google and in traffic from the Web. Google likely also wasted its own bandwidth, and the Times’ bandwidth as well, by returning to crawl both versions of the site instead of just one.

A few years ago, someone with at least a basic knowledge of SEO came along and fixed the New York Times site, so that if you followed a link to a page on the site without the “www”, you would be sent to the “www” version through a status code 301 redirect. The change ruined an example I loved showing people, one that demonstrated that even very well known websites make mistakes and ignore the basics. That’s one of the things that makes the Web a place where small businesses can compete against much larger companies with much higher budgets.
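The fix itself is simple in principle: answer any request on the non-canonical hostname with a 301 redirect to the same path on the canonical one. Here is a minimal sketch of that behavior using Python’s standard library; a real site would more likely do this in its web server configuration, and the hostname below is hypothetical.

```python
# A minimal sketch of a canonical-hostname 301 redirect, using only the
# Python standard library. A real site would normally handle this with
# a web server rewrite rule; the hostname here is hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL_HOST = "www.example.com"  # hypothetical canonical hostname

class CanonicalHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "")
        if host != CANONICAL_HOST:
            # Permanently redirect to the same path on the canonical
            # host, so links and PageRank consolidate on one version.
            self.send_response(301)
            self.send_header("Location",
                             f"https://{CANONICAL_HOST}{self.path}")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"canonical host\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), CanonicalHostHandler).serve_forever()
```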


Agent Rank, or Google Plus as an Identity Service or Digital Signature

What does it mean to call Google Plus an identity service? Might that have implications for how Google ranks web pages? If we believe that Google may start incorporating authority signals into those rankings, it very well could.

At a Q&A at the Edinburgh International TV Festival on August 28th, 2011, Andy Carvin of NPR asked Eric Schmidt about Google’s insistence on the use of people’s real names, and received a prolonged response that pointed to the use of Google Plus as an identity service, with the possibility of a ranking signal built into it.

But my general rule is people have a lot of free time and people on the Internet, there are people who do really really evil and wrong things on the Internet, and it would be useful if we had strong identity so we could weed them out. I’m not suggesting eliminating them, what I’m suggesting is if we knew their identity was accurate, we could rank them. Think of them like an identity rank.

The Return of Agent Rank and Portable Digital Signatures
