Google TrustRank


If you’ve ever heard or seen the phrase “TrustRank,” it’s likely that whoever was writing or talking about it was discussing a Yahoo/Stanford paper titled Combating Web Spam with TrustRank (pdf). Although that paper was the joint work of researchers from Stanford University and Yahoo, many writers have referred to it as Google TrustRank since its publication in 2004.

While Yahoo has a TrustRank approach, Google’s is not a similar one. Yahoo’s TrustRank is aimed at identifying spam on the Web, and it has been patented under the name Link Based Spam Detection. Because that Yahoo patent exists, the USPTO would not grant Google a patent covering the same processes. There is, however, a Google TrustRank.

The confusion over who came up with the idea of TrustRank wasn’t helped by Google trademarking the term “TrustRank” in 2005. That trademark was abandoned by Google on February 29, 2008, according to the records at the USPTO TESS database:

TESS search result for TrustRank showing a service mark claim abandoned on February 29, 2008.

But it appears that Google has come up with a system of its own for reordering the rankings of web pages based upon a Google TrustRank. It is not similar to Yahoo’s approach, because it is not a method for fighting spam the way Yahoo’s TrustRank is.

Does a Google TrustRank exist?

Google did not copy Yahoo’s TrustRank, because Yahoo patented the idea and could exclude Google from using their TrustRank process. It’s worth reading through both patents to understand how different they are from each other. Google’s TrustRank is used to change the rankings of search results, rather than to find webspam the way Yahoo’s TrustRank does.

A patent granted to Google last week discussed how a TrustRank might be associated with people who apply labels to web pages through annotations while setting up a custom search engine. The idea of using annotations is interesting in light of Google’s recent release of Sidewiki, but there’s no sign from Google that Sidewiki and the user trust system in this patent are related.

Some of the ideas in Google’s patent from inventor Ramanathan Guha seem a little similar to a paper that he co-authored when he was with IBM – Propagation of trust and distrust (pdf).

The Google TrustRank patent itself is:

Search result ranking based on trust
Invented by Ramanathan Guha
Assigned to Google
US Patent 7,603,350
Granted October 13, 2009
Filed: May 9, 2006

Abstract

A search engine system provides search results that are ranked according to a measure of the trust associated with entities that have provided labels for the documents in the search results. A search engine receives a query and selects documents relevant to the query.

The search engine also determines labels associated with selected documents and the trust ranks of the entities that provided the labels. The trust ranks are used to determine trust factors for the respective documents. The trust factors are used to adjust the information retrieval scores of the documents. The search results are then ranked based on the adjusted information retrieval scores.
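Reading that abstract as a pipeline, the trust signal works as a re-ranking step layered on top of ordinary retrieval scores. The patent doesn’t publish a formula, so the Python sketch below is only an illustration under my own assumptions – the names, the simple averaging of labeler trust ranks, and the multiplicative boost are all mine, not the patent’s:

```python
from dataclasses import dataclass, field

@dataclass
class Label:
    entity_id: str  # who applied the label (e.g., a custom search engine owner)
    text: str       # the label itself, e.g., "cardiology"

@dataclass
class Document:
    url: str
    ir_score: float                              # ordinary retrieval score for the query
    labels: list[Label] = field(default_factory=list)

# Hypothetical per-entity trust ranks; how these would actually be computed
# (annotation history, votes on annotations, other reputation signals) is not shown here.
TRUST_RANKS = {"expert-cse-owner": 0.9, "unknown-annotator": 0.2}

def trust_factor(doc: Document, query_labels: set[str]) -> float:
    """Average the trust ranks of the entities whose labels match the query's topic."""
    ranks = [TRUST_RANKS.get(label.entity_id, 0.0)
             for label in doc.labels if label.text in query_labels]
    return sum(ranks) / len(ranks) if ranks else 0.0

def rerank(docs: list[Document], query_labels: set[str]) -> list[Document]:
    """Adjust each document's retrieval score by its trust factor, then sort."""
    return sorted(docs,
                  key=lambda d: d.ir_score * (1.0 + trust_factor(d, query_labels)),
                  reverse=True)
```

The only point the sketch tries to capture is the ordering of steps the abstract describes: retrieve, look up labels, look up the trust of the labelers, adjust the scores, then rank.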

One idea behind the patent is that experts on many subjects can be found at many kinds of sites: on pages where individual experts or commentators express themselves, such as blogs and news outlets, or at sites where communities interact, such as forums and rating sites.

Some members of a site where people provide their opinions may be seen as experts, while others may be viewed as less informed or somehow biased.

Sites already provide some indications of how trustworthy their participants are. Auction sites might use ratings to identify trusted buyers and sellers, and forums might use membership criteria and other factors to distinguish the amount of trust that different posters are perceived to have.

If there were a way to “reflect” the trustworthiness of web pages, or of the commentary and opinions associated with the documents showing up in search results, this kind of reputation-based information might help provide more “meaningful” results to searchers. That’s the point behind Google’s TrustRank.

The Google TrustRank patent itself goes into some detail on how the search engine might use information from annotations and labels from experts to re-order the rankings of search results in response to queries.

The Official Google Blog uses some interesting terms when discussing the recently released Sidewiki in their post Help and learn from others as you browse the web: Google Sidewiki. One common point between the patent and Sidewiki is how experts sharing their opinions of a site might be helpful to others who visit that site:

What if everyone, from a local expert to a renowned doctor, had an easy way of sharing their insights with you about any page on the web? What if you could add your insights for others who are passing through?

Now you can. Today, we’re launching Google Sidewiki, which allows you to contribute helpful information next to any webpage. Google Sidewiki appears as a browser sidebar, where you can read and write entries along the side of the page.

But one of the other projects that the inventor of this patent, Ramanathan Guha, has been working upon at Google is the custom search engine that people can build and add to their sites. In February of 2007, I wrote a post at Search Engine Land titled Google Customized Search Engines to Harness The Wisdom of Experts? on a series of five patent filings which listed Ramanathan Guha as the inventor. In that post, I noted that:

In short, custom search engines at vertical sites allow people to search using content sources decided upon and possibly annotated by the site owners.

Information collected from the source choices and the labeling and annotation of those sources, and the use of those custom searches may help inform results at other custom search engines involving related searches, and in query suggestions offered by Google on search results pages from regular Web searches.

The description of labeling and annotation of sources used in custom search engines fits in very well with the process described in the Google TrustRank patent.
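To picture the kind of data such a labeling process might produce, here is a hypothetical record format; the field names are my own, since the patent and the custom search engine documentation describe labels and annotations in more general terms:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Annotation:
    """One labeling event: an entity applies a label to a URL pattern."""
    entity_id: str    # the custom search engine owner or other annotator
    url_pattern: str  # pages the label applies to, e.g. "example.org/heart/*"
    label: str        # topic label chosen by the annotator
    source: str       # where it came from: "custom_search_engine", "sidewiki", ...
    created: datetime

# A trust system could aggregate records like these per entity and per label,
# so that an entity's labeling history feeds into its trust rank.
history = [
    Annotation("expert-cse-owner", "example.org/heart/*",
               "cardiology", "custom_search_engine", datetime(2009, 10, 13)),
]
```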

It’s possible that Google is learning about the trustworthiness of the sites and people who annotate and label pages from many sources, and that what it learns feeds a trust rank that can influence how pages are ranked at the search engine. I wrote another post about the context files in Google Custom Search Engines, and how the builders of those custom searches are treated as topic experts, in The Expertise of Google Custom Search Engines vs. the Wisdom of Crowds.

This Google TrustRank is very different from the TrustRank developed by the writers of the Stanford/Yahoo paper.

Next time you hear someone mention “TrustRank,” you may want to ask them if they mean the Google TrustRank or the Yahoo TrustRank. They are not the same thing.

Last Updated May 30, 2019.


100 thoughts on “Google TrustRank”

  1. This might be very noisy though – fine for things like CSEs, but could this be meaningful in web search?

    The concept of determining the trustworthiness of authors would be very difficult to automate – just like the review sites used for local search are vetted, I imagine the same would be the case for all third party sites? Sidewiki might also be different, but that’s already inside Google’s walls.

    Interesting all the same.
    Rgds
    Richard

  2. Hi Bill,
I’ve read your observations with growing respect and admiration for many years now.

I particularly value your explorations and analysis of details, research, patents, empirical results, case studies, etc. – areas that indicate you can throw expertise into the equation of SE optimisation (a term I hate, by the way) and which make you stand out from so many of the SE optimisation companies who publish self-aggrandising rubbish.

    Regs,

    John

  3. Google mentions distrust and trust changes as indicators. More than “trust” analysis, “trust variation” analysis is on the road. Fake reviews, sponsored blogs, e-commerce trust network influence are pointed out.
Trustworthiness is relative. I’m biased if I speak about one of my customers, but how to know that? And what if I speak of an ex-customer? Is changing trust concerns subject or author?

If you compare the trustrank numbers from Open Site Explorer, called moztrust, to a lot of rankings, you can see there is a growing importance of trust in rankings. So they already use it.

I’ve got you, Bill, but can you please tell me when Google updates their databases? Or can’t we say when they do this? Or is it a continuous process?

  6. Pingback: Google Trust Rank – new version | Tamar
  7. An interesting dilemma. If Google learns about the trustworthiness of sites and people who annotate and label pages (from a number of sources), and this info could be used as part of the trust rank to influence SERPs then how will newer sites that don’t show up be able to forge their way upward if they have no annotations.

If your analysis of this is correct (and I think it is) then Google will be using Side Wiki to add another (possibly quite daunting) hurdle for SEOs. Being logged in with a Google account and commenting on sites through Side Wiki will have to be done from different accounts, from different IP addresses (and different ISPs), possibly even from different geographical areas of the country – and the owners of those Side Wiki enabled Google accounts will also have to have commented over time on multiple sites of different kinds, if what you speculate is true – that Google is learning about “the trustworthiness of sites and people who annotate and label pages from a number of sources” – and this becomes part of the Google algorithm.

    What needs to be considered is that a webmaster or SEO using Google Analytics on 12 of his sites already has his IP recorded by Google. Side Wiki comments from that IP are matchable by Google. So even if an SEO creates Side Wiki comments for 300 sites at random, if amongst those 300 all of his Google Analytics sites are included, Google would be able to identify that person’s Side Wiki comments as spam and could discount those comments by giving them less trust, or no trust at all.

    To my way of thinking “Better Safe Than Sorry” is the way to go on this. Cloud proxies or just plain anonymous proxy Google Account creations may be the way to go. Set up a proxy connection on a second PC and never log into any of the secondary Google accounts without going through the proxy. A cloud proxy might work best as every Side Wiki comment, in theory, would then be created from a different IP. One note of warning though, a lot of public proxies are compromised – so don’t ever transmit any credit card or banking information over them as the packets from your computer are being routed through the proxy.

  8. Pingback: » Das Ende ist nah… | seoFM - der erste deutsche PodCast für SEOs und Online-Marketer
  9. Pingback: When it comes to search results do you want relevance or trust? « 23Musings
  10. I believe trustrank has been in use for sometime now. Probably since the beginning there has been a nod by Google towards trust.

    Google is not only a search engine but they are very business oriented. They know that businesses will more often than not need time in operation to reach success or break even. As such new businesses need to pay their dues or earn trust by growing their business over time.

This does not hold true for real world businesses only, but also for those online as well. Bill, Rand Fishkin, Aaron Walls or any of us can develop an auction site to compete with eBay, and there is little to no possibility Google is somehow going to say ohhh, we need to rank the site at the top of the organic SERPs the first month it is in operation, just because a great SEO decided to venture into the foray. It would still need months of work in obtaining quality links (a.k.a. citations – another method to earn trust).

    Question is will we now see a Blue Bar replace the Green Bar of PageRank???

    Good stuff Bill!!!

  11. Hi Richard,

Thank you – as always, some interesting questions and comments. It’s possible that this kind of information might be on the noisy side, but it’s also possible that some of that noise might be filtered out to provide useful information. What’s interesting is that if you set up channels where you can collect and view and examine this kind of trust data, it gives you the opportunity to test it, and see whether or not it might make a difference.

    There was a nice interview with Google’s Udi Manber last week where he talked about search being about people rather than data. I liked this question and answer from the interview:

    Q: How do you determine that a change actually improves a set of results?

    A: We ran over 5,000 experiments last year. Probably 10 experiments for every successful launch. We launch on the order of 100 to 120 a quarter. We have dozens of people working just on the measurement part. We have statisticians who know how to analyze data, we have engineers to build the tools. We have at least five or 10 tools where I can go and see here are five bad things that happened. Like this particular query got bad results because it didn’t find something or the pages were slow or we didn’t get some spell correction.

    Determining the trustworthiness of authors might be difficult, but as you noted with sidewiki (as well as with the creation of custom search engines), the people who sign up and use those are known to Google, and the data from those is more under their control than say the buyer/seller ratings at eBay, or the reviews at epinions. I also wrote about a Google patent on an online rating process from Google that could possibly be integrated into a trust system that might influence rankings of sites, or help with the creation of search query suggestions. Sidewiki, custom search engines, and online ratings by themselves are useful and interesting. If data from those could be used to help deliver better search results, based upon signals of trust, that would be even better. It’s easy to speculate that the data collected might be noisy, but interesting to see how noisy it might be, and whether or not it could be filtered or utilized in a way that makes search better.

  12. Hi Frank,

The amount of time that it takes Google to crawl and index content found on web pages is going to be based upon a number of factors, including how interesting, unique, and useful it might find those pages, as well as things like how many links (and quality links) might be pointed to pages of those sites, and how easy or difficult it might be for a search engine to view the pages of a site.

    The interview with Udi Manber that I mentioned in my last comment tells us about Google’s desire to try to index information online as quickly as possible:

    If something is written on the Web that is important, we should bring it back to you in seconds. Right now we’re in minutes. Five years ago, it was once a month. We’ll try to make it faster and faster. Clearly we have the ability to do this. It’s getting possible. Now it’s five minutes, and everybody goes, “Five whole minutes? It should be five seconds.”

    Google has always seemed to have some kind of “incremental” database separate from its main database, where fairly recently indexed content would appear and be added to search results even if that content hadn’t been included in Google’s main database yet. I remember in 2001, seeing blog posts show up in search results within the same day, even though Google would only update their main index around every four or five weeks (what was referred to back then as the Google Dance). That update of Google’s “main index” is much quicker than monthly these days.

  13. Hi John,

    Thank you for your kind words – they are much appreciated.

    There’s so much value in looking at primary sources of information like patent filings and whitepapers, and blog posts from people at the search engines – I couldn’t imagine not paying attention to them, even if they may sometimes describe processes or methods that may never be developed. When I was reading through this patent, there was a lot of language about how any of the possible implementations that they discussed might be changed or modified or presented in a way that differs from what was included in the description.

    For instance, one of the things that they mention in the patent is a “trust button” that a site owner might be able to put on their site, that visitors could click upon to show how much they trust what they are reading. We may never see trust buttons on websites, but the patent tells us about other ways that the search engine might be able to start understanding how trusted or untrusted the content is that appears in a blog post, or news article, or review of a product. And they give us some insight into how that information might possibly be used in reordering search results.

    It’s fun when you do see something like a “trust button” start showing up a few months after writing about a patent, but the real value in looking at things like this patent is that they raise questions that can be explored, experimented upon, connected with other ideas, and discussed. And I enjoy it when people come here and offer comments and ideas and suggestions. So thank you for your comment, and I hope to see more from you in the future.

  14. Hi Bartjan,

    As Frank notes in his comment, I believe that Moztrust is modeled after the Yahoo/Stanford trustrank rather than the trust rank described in this patent. Instead of looking at links and linking behavior, the description of Google’s trust rank tells us about how annotations and labels and reputation measures might be used to associate documents (web pages, images, videos, etc.) with queries, without necessarily considering links.

  15. Hi Renaud,

    Those are some interesting questions. I’m not exactly sure what you mean by your last one (Is changing trust concerns subject or author?)

The patent does tell us that it is concerned both about positive expressions of trust and negative expressions, as well as bias. It doesn’t tell us how it might attempt to measure bias, but there might be some ways to attempt to do so. For instance, if someone’s annotation, label, or expression of how much they trust something is compared to the votes, annotations, or labels from a wide range of people from many different places – people with some history of providing reviews, voting annotations up or down, or creating labels for pages – and that first person’s expressed judgment is very different, then it might potentially be considered biased. That person’s other votes, annotations, and labels may also be considered in determining whether there is bias, to see what their past history looks like as well.
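As a very rough sketch of that kind of comparison – the thresholds, the scoring, and the minimum number of ratings below are entirely made up for illustration, since the patent doesn’t give concrete numbers – a bias check might look something like this:

```python
from statistics import mean, pstdev

def looks_biased(person_ratings: dict[str, float],
                 crowd_ratings: dict[str, list[float]],
                 z_threshold: float = 2.0) -> bool:
    """Flag an annotator whose ratings consistently sit far from the consensus.

    person_ratings: item -> this annotator's rating
    crowd_ratings:  item -> ratings from a wide range of other people
    """
    deviations = []
    for item, rating in person_ratings.items():
        others = crowd_ratings.get(item, [])
        if len(others) < 5:   # not enough consensus to compare against
            continue
        mu, sigma = mean(others), pstdev(others)
        if sigma == 0:
            continue
        deviations.append(abs(rating - mu) / sigma)
    # A single outlier isn't bias; a pattern of outliers might be.
    return bool(deviations) and mean(deviations) > z_threshold
```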

Regardless of how things like bias may be determined by a search engine, we are told in the patent that it is something that they are concerned about. How they come up with a way of filtering out bias would be something I would like to see as well.

  16. Hi Reuben,

    Good to see you. Thanks for pointing your post out. I’ve been wondering if we would see something from Google on “trust rank” for years.

The “trustrank” from the trademark appeared to be something very different than what Yahoo was working upon, as an anti-phishing filter, and as Matt Cutts notes in the video in your post, it was a coincidence that the same term was used. This newly granted patent from Google was filed with the Patent Office in 2006, after Yahoo’s paper had been around for a while, so the use of the phrase “trust ranks” in the patent is less likely to be a coincidence. I find that somewhat interesting.

    Both Yahoo and Google are attempting to look at how reputation and trustworthiness could be used to influence the ranking of search results, but the approaches are somewhat different. That’s not to say that Yahoo hasn’t also considered how annotations and reputation could be used in ranking results as well. I wrote a post about that called Social Trustrank and User Annotations as Anchor Text, where something described as “Dual Trustrank” is explored by Yahoo. There are a number of similarities in the approaches.

  17. Hi Suthnautr,

I think that there are many benefits from creating annotations, voting, and showing off your expertise without multiple profiles or IP addresses, and from teaching people how to use those tools so that they can do the same.

  18. It was about time they did that…

Pagerank has been devalued for quite some time now

    I wonder whether they will keep pagerank or drop it….

  19. Hi George,

You’re right. PageRank has always been just one signal amongst many that Google considers when ordering pages in search results, and I think Google has been making it pretty clear over the years that they look at much more than just PageRank. I do suspect that we will continue to see PageRank in the Google Toolbar for a while – it’s a part of the Google brand at this point.

  20. I think this is how Google plans on really cleaning up the search results over time and getting rid of pages that just don’t belong there.

  21. Hi Nick,

    Google has been collecting a lot of data related to how people search, how they link to pages, how they browse the Web, how they label the things they see online. It’s quite possible that the reranking of search results using something like a trust rank is a move towards doing exactly what you say – cleaning up search results.

Let’s not forget that some people say that Google is collecting data from other services they offer for free as well, like for example Google Analytics…

  23. Pingback: Le TrustRank est mort! Naissance du Trust Rank…
  24. Hi,

    Thank you for this very inspiring article, which got me to write my own (ping above), based on the info you gave us.
Now, I took the liberty of calling this new development the PersonRank. It seems appropriate to say that we are all tagged by Google throughout all its tools. To qualify a person requires a profile, but we can imagine some kind of ranking within each profile; thus, the appearance of the PersonRank.

  25. Bill

We have an age-old point here – citations/references/authority – and I feel that the same point is punctuated in the patent. Same old wine in a new bottle! But the point here is that the introduction of Sidewiki seems to be helping in the SERPs. Right? But I don’t think that there is any tangible factor behind this ranking process, or my question is whether this is going to be a manual process of ranking pages based on the reviews & comments. How will search engines calculate the point of authority here?

    On a personal note, I feel it might play a major role in the re-ranking of say top 10-20 sites given a query in the search engine. We may never know!

    And finally yes like someone commented here, seobythesea is making waves in the Search Engine World!

  26. Bill

Forgot another point here: will this alter the PageRank calculation metrics of a site? Does Google Trust Rank (based on votes) influence the PageRank of the site? Not a bad source of measurement after all!

  27. Pingback: Pre-Weekend Recap (week 43) – Onetomarket
  28. Pingback: The Weekly Insider 10-12-09 to 10-23-09
  29. Hi seotalk.gr,

Google is collecting a lot of information from many different sources and services. It enables them to do things like build profiles for people, queries, and web sites. It shouldn’t be a surprise if one of the elements of those profiles is some measure of trust.

  30. Hi LaurentB,

    Thank you. I enjoyed reading your post, though I’m now receiving an error message upon visiting that page. Hopefully you can fix the problem.

    From other patent filings that I’ve read, I do believe that Google will create profiles for people regardless of whether or not those people actually create Google accounts and then Google profiles.

  31. Hi Shameer,

Thanks. There is a very wide range of signals that a search engine uses in determining the relevancy and importance of a page beyond PageRank and its roots in citation analysis. Many of these are query dependent, such as the actual words and phrases that appear upon pages, and some of them are query independent, such as PageRank.

Google has been collecting an incredible amount of data in other forms, and from other sources as well, and something like this Trust Rank approach goes beyond looking at publishers’ links to looking at a much wider range of activities of many people on the Web – real data about how people use the Web. I suspect that it’s not the only way that Google might be considering incorporating the data it collects from the browsing, searching, and labeling that people do on the Web, but I think it is something that we should be paying attention to.

    The patent refers to Trust Rank as something that can influence the re-ranking of pages, rather than being used in an initial ranking of pages. So, I don’t think that it was or is initially intended as a replacement for PageRank as described by this patent. But we have to keep in mind that PageRank is only one of possibly hundreds of ranking signals that Google is likely using at this point.

  32. I installed WP SuperCache. I believe people who visited my site, and come back have problems.
    I’ll try to tweak it, but it should go away with time.

About non-Google “logged in” people, we certainly see an increase of tracking within the SERPs. Is Google becoming a cookie pusher?

  33. Hi Laurent,

I’m still seeing a 404 page. Hopefully the problem will go away with time.

    I’m not sure that there’s been a change in the number of cookies that Google has been handing out, but I am seeing more evidence in search results that Google is tracking some information about my previous searches and my location, and I see evidence of that for others in analytics as well.

  34. Hi Bill,

    I finally removed WP Super Cache… It should be fine now.

Yesterday, I tried out the Mac version of Google Chrome, and almost got a heart attack with Little Snitch (a firewall for outgoing traffic) going crazy about Google trying to connect.
    id.google
    dl.google
    toolbar.queries.google
    etc.

After doing some quick research, it seems like the Google Chrome cookie is pretty nasty stuff.

    The Google Browser is pretty much a malware!

  35. Hi Bill. Thought provoking as usual. Trust, as a concept, has been important to Google right from the start, in their never-ending battle against spam. The theory behind Trust Rank seems quite sound, but I’m not convinced the reality will measure up. Interesting times ahead for SEOs…as always.

  36. Hi Laurent,

    I can see your post now, and some very interesting discussion between you and visitors to your site in the comments.

    I tried Chrome when it was first released, and it slowed my computer significantly, so I stopped using it. I should probably do some of the investigation that you have with the messages that it sends out.

  37. Hi Bullaman,

Thank you. Like most patent applications, this one includes plenty of warnings that the processes they might use may end up looking very different from the ones included in the patent itself. The idea itself – that user annotations and labels, and measures of trust, can be helpful in re-ranking search results – does sound promising.

    The way that it might be implemented may not be what we might expect after reading the patent application – which is why I even considered mentioning the Sidewiki in this post – we just don’t know where Google will look to when collecting labels and annotations and measuring trust. What I might imagine would be interesting, from a search engineer’s perspective is finding ways that trust information might be useful and have a positive impact.

    For those of us who create and work on web sites, I think it doesn’t hurt to think about how we can show visitors that we can be trusted from what we present on our sites, and how we present it, as well as in annotations that we might leave on sites, or labels that we might apply to pages or images or videos.

Hey Bill, thanks for the awesome breakdown of the patents. I’d be curious to see if it gets incorporated with Google’s upcoming social search. Also, what would count as an annotation? Could a “Tweet” be used as a signal, or a share on a social network? As Google gives up some of its discretion to capitalize on outside signals, the fear of spam becomes even greater. I wonder how they’ll tackle that.

  39. Hi Samir,

    You’re welcome. From the demo video that I saw on Google’s social search, it looks like it could provide some signals that might be used in a trust rank process. I’m looking forward to learning more about it.

There are some examples of annotations in the patent itself from outside sources, such as eBay recommendations, review and rating sites, and forum membership rankings, so it’s possible that Google could look to information that isn’t controlled by Google, such as tweets or retweets, as signals worth considering. Also, notice in SideWiki how you can vote “yes” or “no” on how helpful an annotation might be, or in Amazon reviews on whether or not you find a review helpful? Those kinds of things might be considered in weighing the value of an annotation or label, as well as a history of annotations from a particular source and votes upon those annotations.

  40. Pingback: Why I Added Google Friend Connect
  41. Interesting article. Thanks. I wonder if the “Trust Rank” element is to play an increasingly important role in SERPs with Page Rank seemingly being pushed down the order of importance by Google?

  42. Hi Bill,

TrustRank – could this be the official name or theory behind the Vince update at Google? If they are going to give brands better coverage in the SERPs, then surely there is a solid reason, in this case “trust.”

On that matter, and veering slightly off topic, I’ve noticed that sites I’ve been running AdWords for tend to generate better volumes of organic traffic too. Coincidence? I think not. After all, doesn’t a site that spends money on advertising seem more trustworthy than one that doesn’t? Heck, I know that a conman would spend $1 to make $5 or even $10… or more. But would an algorithm?

    I think we’re in for some very interesting times.

    Final note. Thanks yet again for a great post.

  43. Hi SEO Midlands,

I do believe that signals from the user data that Google has collected in places like click logs and query logs are growing in importance. PageRank is just one of many ranking signals that Google looks at, and that’s been true for quite some time. I don’t think it’s a matter of TrustRank supplanting PageRank as much as it is TrustRank being used to re-order results that you might see from the search engine, after an initial ranking using signals like PageRank.

  44. Hi Robert,

    Thanks. Interesting questions.

    I don’t think that TrustRank is behind the change that many are referring to as “Vince.” According to Matt Cutts, that update was the result of the work of someone at Google who is referred to by the nickname Vince.

In this case, trust refers to a reputation score for the people who are annotating or labeling pages, a possible association of those annotations and labels with query terms, and how well those query terms might match up with pages or other content that might be annotated or labeled. That doesn’t necessarily seem to favor brands one way or another.

I don’t believe that there is a connection between the use of AdWords and organic traffic. For many sites, the use of AdWords may not be necessary or appropriate, and it’s questionable whether there is a correlation between advertising and trust. A cost/benefit analysis used to determine the value of advertising is one predicated upon issues such as a return on investment through the use of paid advertising, rather than reputation building. There are many sites online that do very well in terms of traffic and reputation that don’t engage in paid clicks.

  45. Pingback: Suchmaschinen & SEO – Oktober 2009 - Inhouse SEO
  46. This goes to show everyone that they need to focus on doing everything in a high quality format. Including high quality links, high quality content and more.

  47. Hi Joel,

I think focusing upon high quality is never a mistake. How much someone might trust a source may be impacted by the quality of content, but sometimes the best sources of information on a topic might not be the best-designed web sites, or have the most links pointing to them, or even high quality links. That might especially be true for pages or sites that focus very narrowly on a specific topic, and don’t necessarily possess mainstream appeal.

  48. Interesting info, I’m glad the internet “controllers” like Google are pursuing technology to give high quality content more weight.

  49. Hi Clayton,

    I think the search engines have no choice but to change with the evolution of the Web. There are more sites than ever that offer user reviews and annotations and other interactions with web sites, and with social networks.

  50. Hi Bill!

Thanks for the excellent article. I am having trouble getting my head around all the implications discussed here, but I think one thing is very clear: we can’t yet see them plainly enough to formulate any policy for dealing with them, other than the obvious method of gaining ‘trust’ – i.e., offer value in terms of pertinent, accurate, backed-up info, and cite ‘trustworthy’ sources! In other words, just do it the way Google has always asked us to!

  51. Hi Rhys,

    You’re welcome. I think that’s a good approach. What else seems to be important here is to be seen as a reputable expert on the topics that you write about, whether you are the author of a post or page or article or annotation.

  52. Pingback: Strongwords Blog » New in Search and SEO
  53. Hi Jenny,

    No, I believe that Google is still using PageRank at this point. But, there’s always been much more to how Google ranks pages than just PageRank.

  54. Pingback: Are You Working The RDFa Framework Into Your SEO Campaigns? - WebProWorld
I don’t think PageRank is a factor in rankings. It sure is an indication of your links’ relevancy and trust. And I think Google likes trust. Did Yahoo ever have a TrustRank thing? I think yes, but I’m not sure.

  56. Hi Patrick,

    I believe that the role of PageRank really hasn’t changed too much over the past few years, but Google is likely using a wider range than ever of different signals to rank pages. PageRank is a measure of the perceived importance of a page, but not the relevance.

    Yahoo has a few whitepapers and patent filings that involve a different kind of TrustRank – one that involves analyzing the links between sites, and trying to understand how pages are linked together – weighing links from legitimate and valuable sites differently than links from sites engaging in Web spam. One of the first papers from them on TrustRank is Combating Web Spam with TrustRank.

  57. Whether it’s PageRank and/or TrustRank, I think it’s all down to how relevant the pages are to both on and off page connections. Google is continually changing and will always try the next step or algorithm to keep it interesting for us. Thanks for the great info – it’s nice to be able to read something clearly like this. Cheers.

  58. hi Bill,
    I like what you said in a comment above about page rank, “PageRank is a measure of the perceived importance of a page, but not the relevance.”
    I understand how one can calculate page rank, because it should be objective math. The problem with page rank, as you point out, is that it only indicates importance (if even that) but not relevance.
    As to “Trust Rank”, it seems too subjective, and I don’t see how a computer could (reliably) calculate it…
    Still, if anyone can do it, that would be Google. 🙂
    A long but interesting read here. Steve

  59. Hi Paul,

    You’re welcome. Relevance is only part of what the search engines are using to rank pages. PageRank has nothing to do with relevance at all. Trustrank is also a measure of importance or quality rather than of relevance. Of course search engines will try to show relevant results for queries, but they also attempt to consider how material, or important, a page might be when they rank those pages to determine what order to show them in.

  60. Hi Steve,

One of the difficulties of coming up with an algorithm that attempts to define something like trust, and to use a trustrank, is that you do need to come up with a mathematical way of measuring it, and develop criteria for determining how that algorithm will work.

    It’s possible to say that any algorithm is somewhat subjective, in that it’s designed by people based upon assumptions that they make as to how importance (PageRank) or trust (expertise or trust) might be defined.

    The patent does go into some of the math and some of the assumptions behind calculating a trust rank, and I tried to present a fairly high level overview, but it does come down to trying to compare how important or authoritative the people are who are making annotations, and to give more weight to the labels that they apply to a page when ranking that page.
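As a toy example of that weighting – the numbers and the simple average here are mine, not the patent’s:

```python
# Hypothetical labels applied to one page for the query topic "cardiology",
# each paired with the trust rank of the person who applied it.
labels = [
    ("cardiology", 0.9),  # label from a well-regarded annotator
    ("cardiology", 0.3),  # label from a little-known annotator
    ("gardening", 0.8),   # off-topic label, ignored for this query
]

query_topic = "cardiology"
matching = [trust for label, trust in labels if label == query_topic]

# One simple way to turn those labels into a per-page trust factor: average
# the trust of the annotators whose labels match the query topic.
trust_factor = sum(matching) / len(matching) if matching else 0.0
print(trust_factor)  # 0.6 -> a boost applied to the page's retrieval score
```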

  61. Thanks Bill, some really useful stuff here…I have heard this before but not from an SEO authority. I try getting high pr links but have always known…

  62. What needs to be considered is that a webmaster or SEO using Google Analytics on 12 of his sites already has his IP recorded by Google. Side Wiki comments from that IP are matchable by Google. So even if an SEO creates Side Wiki comments for 300 sites at random, if amongst those 300 all of his Google Analytics sites are included, Google would be able to identify that person’s Side Wiki comments as spam and could discount those comments by giving them less trust, or no trust at all.

  63. Hi mivpljaipur,

    I really wasn’t focusing upon the spam aspect of this approach, or how to attempt to spam Google by using annotations like Sidewiki. But, a search engineer does need to be careful when coming up with some kind of ranking or reranking approach, to anticipate how that approach might be abused.

    Google likely does record IP addresses when they are used in a number of resources that they provide, and it also requires someone to log into a Google account when using something like Sidewiki or Google analytics. If someone has multiple Google Accounts, it is possible that the search engine is aware that some accounts may be held by the same people, or by people who use the same router or computer or connection to the Web. That IP address may be something that Google looks at when calculating trust, but it’s quite possible that it’s not the only thing.

Oh well, I guess all the ‘get links from edu & gov sites!’ advice has something to do with this patent, lol. But I actually thought that Yahoo was the one that originally came out with the idea of a trustrank. Do you know if that’s true, Bill?

  65. Hi John,

    Links from .edu and .gov sites really don’t have anything to do with Google’s Trust Rank. Yahoo did originally come out with something called Trustrank a few years ago, but it is very different from Google’s. I do explain that at the start of my post.

  66. Dear Bill,

Thanks for clearing up what is still a very murky, unrefined entity in SEO. I had a great deal more questions than answers before I read this.

    Matthew

  67. Bill

this is something that relates incredibly well to a living organism. The internet becomes an animal. Sounds funny… but in fact it is more true than funny. Google’s robots are so well designed that they can track the actions of its users. It somehow takes honesty and puts it on a pedestal.

    I want to read more on this topic, as I am also a business owner. it is better to work with Google rather than against it, wouldn’t you agree?

    thanks mate, James

  68. Hi James,

    I guess you could apply a number of models based upon biology to the Web, and to the people, and pages and applications we see there. Symbiosis is a good example of an analogy that could be used to describe how search engines and web sites might interact together.

    Google has the potential to be one of the best visitors to your website, not because they might buy something from you, or engage your services, but rather because they may potentially share what they find on your site with a considerable number of people. There’s a lot of value to that.

  69. Hallo Bill,

excuse my bad English, but are you saying that trust is more important than backlinks from high PR sites? Or do these go together? I do not do SEO, but I try to make my site friendly to Google & Yahoo – could you tell me more about how to get a higher trust rating?

    Many Thanks for your great info.

  70. Hi Peter,

    Under this patent, Google might evaluate how trustworthy people are who label or annotate websites, whether through something like Google Custom Search or some other method (like a social network tag or label of some type).

    The author of the patent is the person who spearheaded the development of Google Custom Search Engines, and it’s possible that Google may use information from the labels and annotations for websites within those custom search engines to influence the rankings of some web pages for certain queries, and to develop query suggestions that are shown on search results.

    Trustrank isn’t the measure of trust for a page itself, but rather how trustworthy the people are who are providing labels for pages. The more trustworthy they are, the more weight their labels may carry.

  71. Pingback: Google’s Heading for Life after Link Trust – Here’s How to Prepare - SEOgadget.co.uk
this is pretty much the basis for Google rankings and SERPs, am I correct? This is also probably the basis for some of the Panda 2 update as well. The possible weight on the plus ones… retweets, perhaps Facebook likes. SEO is having a huge transition at the moment; page one is becoming more and more competitive.

  73. Hi Joe,

    It’s possible that Google is exploring how to incorporate reputation scores for different users, and annotations from things that they write on the Web into Google rankings and search results. I don’t know if that is behind some of the Panda updates, but I suspect that there are a lot of people at Google exploring how user-information data can work together with algorithms that focus upon on-page and off-page features of websites to improve the quality of search results.

Hi Bill. Great post with awesome resources and analysis. How do you think that the consolidated Google privacy policy which went live today (and the corresponding integrated user profile many believe Google will be building as part of this services consolidation) will impact Google Trust Rank for Google users, if at all?

As Internet marketers, from a practical perspective, we can’t avoid using Google services (Webmaster Tools, Google Places, AdWords, Analytics, YouTube, Google+, etc.) and thus will likely be logged in most/all of the time, providing Google a wealth of information that could be used to build a Google user trust profile. What are your thoughts on this?

    Thanks for sharing.

  75. Hi Rick,

    Google’s trust rank involves Google considering the reputation and credibility of people who label or annotate pages or content. If people are logged into Google services, it’s possible that if they do engage in activities like plussing a page, that may have a positive impact upon what they are adding a +1 to, or at least upon the reputation of the person who created that content.

But the fact that Google consolidated their privacy policy might have little impact upon trust rank as a whole. I’m not sure that the consolidation changed how Google follows their privacy policy for the different services they offer as much as it brought everything into one place. If Google search wasn’t using things like analytics information in the past to influence the rankings of web pages, I don’t think they will suddenly start because of the change to their privacy policy.
