The New PageRank, Same as the Old PageRank?

When a judge writes a judicial opinion on a case, it often includes more than just the ruling. It usually contains an analysis of the present law, the legal atmosphere, and how the ultimate holding in the case was arrived at. Those written rulings can also include legal opinions on issues that don’t necessarily play an essential role in the outcome of the case at hand, and those are often referred to as “dicta.”

When you read a patent, you’ll see that it’s broken into a number of parts. The most important of these is the claims section, which is what a patent examiner focuses upon during prosecution, when deciding whether or not a patent should be granted. Patents also contain description sections, which give a richer and more detailed look at how the technology behind a patent might be implemented (with emphasis on the “might”). Often those descriptions include material that isn’t reflected within the claims section, and in many ways, the description sections could be considered similar to the dicta that sometimes appears within judicial opinions.

Stanford University was granted two new patents today under the name Scoring documents in a database, both of which were filed at the United States Patent and Trademark Office on January 19, 2010. These two patents, assigned to Stanford and listing Lawrence Page as inventor, are described as continuations of the earlier PageRank patents assigned to Stanford: the original filed in 1998, and the continuations filed in 2001 and 2004.

The Old PageRank Claims

The claims sections of the new patents make some very interesting things about PageRank much clearer than the older patents did.

If you compare the description sections of the new and older versions of these patents, you’ll notice that they are substantially the same, with a few formatting differences between the patent filed in 1998 and the two new continuation patents, and a couple of paragraphs that differ slightly between those three patents and the ones filed in 2001 and 2004.

Where the main differences appear are in the claims section of each of those patents. The 1998 patent covers many of the topics mentioned in the description section of the patent, but in a very general manner. For example, we see the following claim in that patent:

4. The method of claim 1, wherein the assigning includes:

identifying a weighting factor for each of the linking documents, the weighting factor being dependent on the URL, host, domain, author, institution, or last update time of the one or more linking documents, and

adjusting the score of each of the one or more linking documents based on the identified weighting factor.

The 2001 patent provides a very brief two-paragraph claims section that doesn’t address much of the description section. The 2004 patent’s claims section focuses more specifically upon how PageRank might be calculated between pages, without looking at the different weighting factors from the first patent.
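The claims differ between the patents, but they all build on the same underlying computation. As a reference point, here is a minimal textbook sketch of the iterative PageRank calculation in Python. This is not Google's implementation; the 0.85 damping factor is the value suggested in the original PageRank paper.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Textbook power-iteration PageRank.

    links maps each page to the list of pages it links to.
    Returns a dict of scores that sum (approximately) to 1.0.
    """
    # Collect every page that appears as a source or a target.
    pages = set(links)
    for targets in links.values():
        pages.update(targets)
    n = len(pages)

    # Start from a uniform distribution.
    scores = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        # Every page gets the "random jump" share first.
        new_scores = {page: (1.0 - damping) / n for page in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                # Split this page's score evenly across its outlinks.
                share = damping * scores[page] / len(targets)
                for target in targets:
                    new_scores[target] += share
            else:
                # Dangling page: spread its score across all pages.
                for target in pages:
                    new_scores[target] += damping * scores[page] / n
        scores = new_scores
    return scores
```

A page with no inbound links ends up with only the random-jump share, while pages that accumulate links from elsewhere score higher.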

The New PageRank Claims

The new PageRank patents are:

  • Scoring documents in a database (US Patent 8,131,717) Filed January 19, 2010, granted March 6, 2012, assigned to The Board of Trustees of the Leland Stanford Junior University
  • Scoring documents in a database (US Patent 8,131,715) Filed January 19, 2010, granted March 6, 2012, assigned to The Board of Trustees of the Leland Stanford Junior University

Here are some of the things that appear in the Claims sections of these new patents that aren’t in the old ones, even though they are referred to in the descriptions for those patents:

Personal Biasing of PageRank – When applying a personalized approach to PageRank, scores associated with web pages that someone has bookmarked or has indicated as their home page might be higher for that person when they search.

Links on the Same Domain – These links might possibly be ignored when scores for pages are calculated.

Links on the Same Server – These links might be ignored when scores for pages are calculated, or might be weighed less than links from different servers.

Geographic Locations – Ranks might be increased for pages with back links from other pages that are created by different authors in a number of geographic locations.

Links from Home Pages – A link from the root page, or home page, of a domain might be given more weight than links from other pages.

Links from More Recently Modified Pages – A link from a page that has been more recently updated might carry more weight than one from a page that hasn’t been.

Text in Links – If the text in a link pointing to a page matches or is associated with the query term used to find that page, the score for the page might be made higher.

Text Adjacent to Links – If text adjacent to a link pointing to a page matches or is associated with the query term used to find that page, the score for the page might be made higher.
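To make the flavor of some of these claims concrete, here is a small illustrative sketch of how a few of them (same-domain links, links from home pages, links from recently modified pages) might adjust the weight of a single link. The thresholds and multipliers here are invented for the example; the patents don't specify any particular values.

```python
from urllib.parse import urlparse

def link_weight(source_url, target_url, source_last_modified_days):
    """Illustrative weight for one link, loosely following the claim
    language. All numeric values are placeholders, not from the patents."""
    source = urlparse(source_url)
    target = urlparse(target_url)

    # Links between pages on the same domain might be ignored entirely.
    if source.hostname == target.hostname:
        return 0.0

    weight = 1.0

    # A link from a domain's root/home page might be given more weight.
    if source.path in ("", "/"):
        weight *= 1.5

    # A link from a recently modified page might carry more weight.
    if source_last_modified_days < 30:
        weight *= 1.2

    return weight
```

In a full implementation, a weight like this would scale the score each link passes along during the PageRank iteration, rather than every outlink contributing equally.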

Missed Ranking Signals in Claims

The patent descriptions point out some other issues that still don’t seem to be addressed by the claims within any of these patents. That doesn’t mean that Google is or isn’t using them. They include:

Relative Importance of a Link within a Document – Highly visible links near the top of a document might be given more weight.

Real Usage Data – The description also tells us that when real usage data is available, it might be used to help provide a “more accurate or comprehensive picture.” None of the PageRank patents, old or new, include information within their claims about how such usage data might be used.
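The patent description elsewhere suggests that real usage data could serve as the distribution for the random-jump ("alpha") factor. Here is a hedged sketch of that idea; the form of the usage data (per-page visit counts) and the add-one smoothing are my assumptions, not details from the patents.

```python
def usage_biased_pagerank(links, visit_counts, damping=0.85, iterations=50):
    """PageRank where the random-jump distribution is biased by usage data.

    links maps each page to the pages it links to; visit_counts maps
    pages to observed visit totals. Sketch only, not Google's method.
    """
    pages = set(links)
    for targets in links.values():
        pages.update(targets)

    # Jump probability proportional to observed visits, with add-one
    # smoothing so unvisited pages still get a small share.
    total = sum(visit_counts.get(p, 0) for p in pages)
    teleport = {p: (visit_counts.get(p, 0) + 1) / (total + len(pages))
                for p in pages}

    scores = dict(teleport)
    for _ in range(iterations):
        # Random jumps now follow the usage prior instead of being uniform.
        new = {p: (1.0 - damping) * teleport[p] for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                share = damping * scores[page] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling pages also redistribute via the usage prior.
                for t in pages:
                    new[t] += damping * scores[page] * teleport[t]
        scores = new
    return scores
```

With a uniform teleport vector this reduces to ordinary PageRank; skewing the vector toward heavily visited pages is one way usage data could "fill holes" and shift the final scores.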


Google’s Reasonable Surfer patent addresses some of the issues that aren’t covered in the claims to these PageRank patents, such as the “relative importance” of a link on a page, by looking at features associated with the links themselves (font size or color or type, actual text in the link, whether the link text is commercial, aspect ratio of image links, etc.), features associated with the page that the link appears upon, features associated with the page targeted by the link, and also usage data associated with those links and pages.

Google had a permanent license to use PageRank for the life of the original patent, and an exclusive license to use it that was set to expire in 2011. It’s possible that Stanford and Google may have renegotiated that timeline. Researchers at Washington State University recently applied an algorithm similar in many ways to PageRank to try to understand chemical behaviors, but it’s not clear that’s an indication that the exclusive agreement between Stanford and Google has ended.

It’s possible that these new patents may signal a continuing agreement between Stanford and Google to keep PageRank between them. Lawrence Page is no longer a student at Stanford like he was when he originally worked on the algorithm which is named after him.

Many of the factors that are now in the claims from the two new patents were described in a much more general manner in the claims of the original patent, and these new patents were possibly filed to make their use more clear. It is possible that Google had been using something like them from the earliest implementations of Google, and quite possibly has moved on since then. A recent statement from the Google Inside Search blog noted that Google was retiring a link analysis method that had been in use for a number of years and my last post, 12 Google Link Analysis Methods That Might Have Changed, contained some possible candidates.

Also keep in mind that just because the claims in these new patents state that links between pages on the same domain or the same server might be weighed less or ignored when PageRank is calculated, that doesn’t mean those links are being given less weight or ignored now, or that they will be in the future. Then again, if Google thinks it can provide higher quality results by making changes to how it ranks pages, it might do something like impose limits on how much PageRank is passed along between links on the same domain.

Google has also published a good number of other patent filings that describe how they might use other link analysis approaches to rank search results differently when it comes to personalization, the weight and relevance of anchor text, common ownership of different domains, and more. The methods described within those may supersede how Google uses the kinds of factors listed above.

Is the new PageRank different from the old PageRank, just explained better? Or did the exercise of updating and expanding the claims within the continuation versions of these new patents trigger a change in Google’s approach to link analysis?

61 thoughts on “The New PageRank, Same as the Old PageRank?”

  1. Curious about your opinion Bill: do you think the “Links from more recently modified pages” means that if you update an old post to insert a new link, all links from that old post gain more value?

  2. That is a good question Joost but without any SEO involved who normally goes to an old page to add new links? That to me doesn’t sound like an average user/author’s behaviour?

  3. @Yousaf you can easily update an old post by adding new content. There are enough situations where you can apply this tactic. I can agree on the fact that average webmasters without SEO goals will write a new post instead of updating an old one. I don’t think Google will increase linkvalue from posts in which only one link is changed or added. You’ll have to change the publication date or add a paragraph of fresh content.

    @Yoast: I think they are comparing two identical links from two pages within one domain, comparing the freshness of both pages and adjusting the linkvalue according to that.

  4. Some very interesting points there, some which come at no surprise though…
    However, could you explain how you see “Text Adjacent to Links” play out and why this is deemed to be important… I’m somewhat confused about this.

  5. Hi Bill!
    Links from home pages might be given more weight… It seems that this type of links quite often are paid links. Any opinion about that?

  6. About Real Usage Data: If we think about the Random Surfer Model on a website that has Google Analytics, this can mean that the weight of one external link (example: a link to page_A) is related to the ratio of that link’s CTR to the CTR of all external links.

    So, Weight(link point to page_A)=CTR(link point to page_A)/CTR(all external links)

    What is your opinion about that?

  7. hello..
    Please note that the PageRank of a given vertex depends on the PageRank of all other vertices, so even if you want to calculate the PageRank for only some of the vertices, all of them must be calculated. Requesting the PageRank for only some of the vertices does not result in any performance increase at all.

  8. @Joost It seems unlikely that all of the links on the modified page would receive more value. If the freshness of links is being monitored, it’s far more likely that each individual link would have a timestamp. Thus, when you add a new link to an old post, it would be fresher than the others (and receive a boost, assuming Google is actually incorporating this as a link analysis signal).

    In response to the original question, I think this patent is simply a more thorough explanation of certain aspects of PageRank (to make it easier to defend in court against competing implementations). If you own a general patent as well as more specific patents that itemize potential implementations, it gives you a much stronger legal position.

    Additionally, many of the signals in the patents in question were well-known and actively studied in the research community circa 2003-2007 (e.g., personalized PageRank was hot back in 2003-2005, same domain link dampening (aka SourceRank) was hot back in 2006-2007, etc.) so I’m sure Google was already experimenting with them back then (if not sooner).

  9. I love this claim; “Text Adjacent to Links”. It seems to me that once this is the exclusive weight factor, spam will become less effective. However, it appears that “Text in Links”, more frequently called anchor text, is what is still dominating when to comes to rankings for particular queries. I’m not an expert, but what would it mean (aka, would it blow up the whole game)/ how hard would it be for Google to eliminate anchor text as a ranking factor?

  10. The links on the same server is a new or newish claim to me. I always have worked on it being the class C part of the IP as opposed to it being the same server.

  11. I’m also curious about Joost’s question. Seems like it might with an updated sitemap. Also, I’d like to read more about the pagerank in plain English? Any additional suggestions? I’m sort of bottoming out it seems.

  12. There are two conflicting choices for ideal link placement:

    a) Links from Home Pages – A link from the root page, or home page, of a domain might be given more weight than links from other pages.

    b) Text Adjacent to Links – If text adjacent to a link pointing to a page matches or is associated with the query term used to find that page, the score for the page might be made higher.

    Would you rather get a link on a home page which typically acts as a hub to multiple topics or from a deeply focused internal page?

    I would go for a page as it would provide greater editorial value to my link.

  13. Thanks for another interesting post. I really don’t get how these things are considered patentable. They seem obvious to someone paying attention even 10 years ago. Since I am not in the business of doing this stuff I am not sure exactly how to go about measuring these things, but the “new ideas” are the type of things I thought of 10 years ago. I guess if they have novel ways of accomplishing the measures it might make sense to be patentable (in general I find the state of our “intellectual property” rules crazy).

    I know Google has some difficulty dealing with people trying to game their system, but frankly I am not impressed with the progress they have made in the last 10 years. They still seem superior to the competition (so they have that defense) but really I would hope for more. I hope more competition for searching comes forward.

    I do think Google has tons of great people doing great stuff. I just wish for more – especially on some basic usability (search and other products).

  14. Interesting read. The links from sites on the same server has always been a suspicion of mine, but it’s nice to see it written down.

    Thanks for the post.

  15. @Jan-Willem Bobbink I know that technique is being used/abused by many news sites in fact some of their CMS are programmed in a way that by a click of a button title tags on relating stories get updated along with the first paragraph of the articles – good way to exploit QDF.

  16. @Dan Petrovic – I would go for the home page link every time. Homepage PR & authority (as long as it’s broadly on topic) will beat a more specific but less authoritative page IMHO.

  17. So basically, if I am doing web design and hosting, and a client contacts me and wants both of these services from my company, but the client has a high pagerank and I move them to my server, put my link on their page, my PR will go down because both sites are not in the same network range/IP address and geographical location?

  18. In my opinion, all changes to PR calculation make sense and must admit that most of us thought such factors were taken in consideration already even in the “old” PR formula.

  19. Bill, this is quite a new interesting stuff I came across. Recently one of my clients was quite interested in promoting his blog post link, but somewhere I felt that it wouldn’t be a good activity from Google’s point of view, so I suggested him to not to take that step. And today, your post has taught me that “A link from the root page, or home page, of a domain might be given more weight than links from other pages”. Very good lesson for me. Thanks for sharing.

  20. These two claims:

    1. Links on the Same Server – These links might be ignored when scores for pages are calculated, or might be weighed less than links from different servers.
    2. Geographic Locations – Ranks might be increased for pages with backlinks from other pages that are created by different authors in a number of geographic locations.

    Are really interesting, I work with quite a few web design companies who naturally add a link back to their website from the site’s they design, they also host these websites, and most of their clients are from the same geographic region – their PageRank score is around 3-4. I also work in the hospitality industry and the sites here are linked from sites from not only in the UK, but also Europe, the links are also from different servers – these sites PageRank score is around 5-7.

    Not scientific, but can maybe help explain sticking points.

  21. I’ve been trying to get a handle on the “new” pagerank, so this was a great read for me. I guess I never realized that google judges a link based on adjacent text as well as the actual link text… Makes the content of the leading article all the more important doesn’t it!

  22. As always great article. Thanks so much for posting all the helpful links. To answer the final question:

    Is the new PageRank different from the old PageRank, just explained better? Or did the exercise of updating and expanding the claims within the continuation versions of these new patents trigger a change in Google’s approach to link analysis?

    IMHO: Yes, it presents a very different approach to evaluating the importance of websites based on the use of links. Google didn’t always have the capabilities to rank pages based on personalized behavior as it does today, but with the increased popularity of Chrome, more and more signed-in users are agreeing to the recently posted terms and conditions about how Google can use the data it collects about them. I strongly believe that the exercise of updating the claims has less to do with the timing of how Google algorithmically evaluates links, and even if it did, many small adjustments over time make it difficult to attribute which of these factors is dialed up or down and when. To me, the timing of the publication is less of an indicator of this shift towards personalization in weighting links. What is more valuable to me (and I’m so glad you’re keeping us apprised of these patent filings) is that all of these must be valuable signals on some level, otherwise they would not be incorporated into the patents.

    Great article!!

  23. What I don’t understand is why does google get US Patents on stuff like this. Why get a patent at all?

  24. Hi Joost,

    The patents don’t specifically mention the kinds of changes that might be considered modifications. The patent description does tell us that:

    In many cases it is appropriate to assign higher value to links coming from pages that have been modified recently since such information is less likely to be obsolete.

    The claims section of the 8,131,715 patent also tells us that:

    24. The computer-readable medium of claim 16, where the model of the surfer is designed to weight a first link from a first one of the identified documents more than a second link from a second one of the identified documents, where the first link is weighted more than the second link based on the first one of the identified documents being modified more recently than the second one of the identified documents.

    Now if someone inserts a link to a casino in a blog post about motorcycles, I don’t think that modification makes the blog post less obsolete, so just an update like that really doesn’t meet the intention behind the patent. We aren’t given more in either the descriptions or the claims though. But just a look at the last time a page or post was updated doesn’t seem enough. I would suspect that there would be more analysis of some type involved.

  25. Hi Yousaf,

    It should sound old. The patent descriptions from these two patents filed in 2010 are almost exactly the same as the one filed in 1998. But the updated claims in the newer patents are more specific, as if Google might have found a way, and a reason, to take some of the ideas from the first version of the patent and implement them.

    I’ve returned to some older posts when there was additional information and I wanted to address the changes. I would have done that for those posts regardless of whether I was an SEO or not. I imagine that there are many pages on the web that could do with being updated, and a fair number of webmasters who actually do add new content, or links to new sources of information.

  26. Hi Jan-Willem,

    Interesting points. If I write a new post about a topic, and I return to an old post and add a link to the bottom of that old post, I have actually added some value to it, and made it less obsolete by providing a way for people interested in the topic to find updated information. But I agree that adding new content to the post would probably meet what seems to be the spirit of the patent better. The truth is though, we don’t know what kind of analysis might take place.

    I’m not sure that the patent is referring to two identical links from two pages on the same domain, but the claim regarding that particular aspect of this patent probably could be more clearly written.

  27. Hi Piperis,

    PageRank is often described as a query independent ranking signal, where pages might be ranked independently of query terms that they might be found for.

    But what if, instead, PageRank might change based upon the query used to find a page? If I do a search for a particular phrase or term, and a link pointing to a page that might rank for that term includes the query term or phrase, or text adjacent to that link (or within a certain distance of it) includes the query term or phrase, then the value of the PageRank passed by that link to the page might be adjusted a little higher.

    For example, I search for [enormous red widgets], and a number of pages are returned as a result of my search. One of those pages has a link pointing to it that contributes to its final ranking and that uses the anchor text “enormous red widgets,” and because of that, its PageRank score might be a little higher, causing it to rank a little higher in the final search results. Or, instead of using that anchor text exactly, the linking page might use the anchor text “here” as part of a sentence that says: “see enormous red widgets here,” with just the word “here” as a link. Because the phrase “enormous red widgets” is adjacent to that link, the value of that link might also be boosted a little.

  28. Hi Steve,

    As I responded to Joost, the patent mentions “modified content,” and doesn’t limit that to links.

    I agree with you that by adding continuation patents that further explain, and in some cases, update aspects of the technology that might have been described in an earlier patent, Stanford (and by extension as a licensee, Google) ends up strengthening its legal stance in case of challenges.

    The original patent and its description have been around for a good amount of time, and I hope that people were looking to it to experiment with some of the different things described within it. Sometimes some of the things that we see in patents are more aspirational than implementable, and may need to wait for supporting technology to be developed before they can be used. It’s possible that the kind of increased speed and capacity that came with an update like Google Caffeine might have made some of the ideas discussed in the original PageRank description capable of being implemented.

  29. Hi Matt,

    I’m not sure that there’s too much difference in looking at “text adjacent to links” as opposed to text within links.

    When it comes to anchor text as a ranking signal, I don’t think that Google would eliminate its use if instead it could find ways to eliminate abuse of anchor text instead.

    For example, Google might give less weight to hypertext relevance from links on the same site as the page being linked to, or on pages that might be seen as affiliated with that page, such as being under the control of the same owner. Google might also implement an approach from their phrase-based indexing patents that gives more weight to anchor text that is relevant to the content on the page being linked to, or that uses “related” terms, meaning words or phrases that tend to co-occur with some frequency on top-ranking pages for queries involving the terms that show up on those pages.

  30. Hi Jason,

    There’s no reference within any of the PageRank patents that refer to IP addresses or to different classes of IP blocks. I’ve seen a lot of discussions surrounding that on SEO and webmaster related forums, but never within a patent or whitepaper from Google.

  31. Hi Thimios,

    What the descriptions to the patent say about usage data is:

    Real usage data, when available, can be used as a starting point for the model and as the distribution for the alpha factor. This can allow this ranking model to fill holes in the usage data, and provide a more accurate or comprehensive picture. Thus, although this method of ranking does not necessarily match the actual traffic, it nevertheless measures the degree of exposure a document has throughout the web.

    There are a number of patents from Google that describe how they might collect and use data from users related to searches: clickthroughs in search results, the browsing of web pages and data collected through toolbars, the saving of favorites, the printing of documents, how far someone scrolls down a page, mouse pointer movements on search results pages, query refinements, dwell time on pages, and so on.

    These methods don’t limit themselves to clickthroughs, and they don’t mention Google Analytics whatsoever. Google Analytics was built to be an analytics tool for webmasters to learn about traffic to and through their own pages, and it is only in use on pages that include the Analytics code. As such, it would be inefficient for Google to rely upon Google Analytics to track and monitor usage data for use in ranking web pages, and they would miss out on a lot of data from sites that don’t use it.

    I don’t think that Google would limit themselves to click-throughs to develop a ranking weight that might provide a query independent ranking of pages. That wouldn’t work well.

  32. Hi David

    My post isn’t about how PageRank is calculated, or your interpretation of how it is calculated.

    It’s about the publication of two continuation patents from the original granted patent, and some significant changes to the claims within those patents. Do you have any thoughts or opinions on those changes?

  33. Hi John,

    These new patents are continuation patents, so their effective filing date is the date of the first non-provisional patent in the chain of patents that they belong to, which goes back to 1998. So, you may have been thinking of these things as obvious 10 years ago, but Larry Page and Stanford were still, effectively, three years ahead of you.

    Frankly, the whole idea or argument as to whether “these things are patentable” is something I really don’t care about at all. I’m trying to learn something from what was filed as a patent, and not the merits of the patent itself, or the patentability of the subject matter.

  34. Hi Chris,

    Under this patent, it’s possible that when PageRank is calculated, links to pages at different domains on the same server might be ignored. But that’s only a possibility mentioned within the patent, and it doesn’t mean that it is actually happening.

    The patents say nothing about network ranges or IP addresses. As for geographical location, it looks like there’s the potential for some boosting of PageRank as described under one of the patents if a page has links from a diversity of geographical locations, but the patent really doesn’t explain/describe that in much detail.

  35. Hi Mark,

    I’m not sure that I really took seriously the thought that Google might be calculating a query-dependent PageRank on the fly, which is hinted at in the 8,131,715 patent.

  36. Hi Zain,

    One of the two patents tells us that a link from a home or root page on a site might potentially be boosted because that page would potentially be seen as a more “valuable” page than others on the site. It’s an interesting assumption, but I’ve seen sites where I would sometimes prefer to have a link from a page that is more focused upon a topic relevant to something I’ve written about, or have offered on a page. If someone visits that page and is interested in what it contains, they would probably appreciate my page more if they then clicked through.

  37. Hi David,

    A straightforward PageRank calculation that doesn’t take into account things like server location or geographic location is almost impossible to draw conclusions about when looking at toolbar PageRank and an incomplete list of links that Google might know about, whether provided through a link search operator or through the more detailed list of links in Google Webmaster Tools.

    I’m not sure that some of these other elements are things that we could prove may or may not be in use by Google. A link from a website to their host/designers site might pass along some PageRank, or it might not, but chances are that Google is looking at many other links than just that one link.

  38. Hi Warren,

    Thanks. I do think that sometimes what we see in patents includes things that might not yet be feasible from a technological stance, but could be something that a Google, for instance, might try to work towards. So the elements of these patents that might point towards an on-the-fly personalized PageRank was definitely something that Google couldn’t do back in 1998, but is probably more capable of doing today. The question is, in the intervening years since it was first hinted at in 1998, has Google come up with better approaches towards personalization of results?

  39. Hi Joseph,

    Actually, the PageRank patents belong to Stanford University rather than Google, but Google has a license to use them, and had an exclusive license up until 2011. It’s possible that the exclusive license may have been extended beyond that, but we don’t know that for certain.

    Why patent things like PageRank? If it wasn’t patented then many other search engines would have been free to use it as well.

    This page also contains some thoughtful answers on why people patent things:

    A snippet:

    1. Why patent something? Isn’t society better served if I publish my research and dedicate it to the public?

    The answer to these questions is perhaps counter-intuitive. Experience reveals that if you dedicate your inventions to the public, then chances are your inventions will never find their way into public use. You will have removed the incentive for a commercial entity to make the investment necessary to bring your ideas to practical fruition and into public use. Even after you have demonstrated that your invention works, hundreds of thousands of dollars in product testing and development, production engineering and marketing still may be required before it is available to and accepted by the public. Without the limited period of commercial exclusivity provided by a patent, no company will have the incentive to make these investments and bring the invention from our laboratory into public use.

  40. Pingback: What is semantic search on Google?
  41. Hello Bill,
    As we see very often, pages with high page rank don’t necessarily rank higher than pages with a much lower page rank.
    Do you consider it more valuable to have a (contextual) back link from a page with high PR or from a page which is ranking high in SERPs?
    At the end of the day, for me, it is the position in SERPs that matters, having a high PR is nice but not a specific target in itself.

  42. Hi Bill

    4. The method of claim 1, wherein the assigning includes:

    identifying a weighting factor for each of the linking documents, the weighting factor being dependent on the URL, host, domain, author, institution, or last update time of the one or more linking documents, and

    adjusting the score of each of the one or more linking documents based on the identified weighting factor.

    I have a question on this: please tell me why links that I have made on different directories, but which might be on the same server, are going to be removed from PR “weighting.”
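    The weighting-factor language in claim 4 quoted above can be illustrated with a small sketch. This is purely hypothetical and not Google’s actual implementation; it simply shows one way a per-link weighting factor based on the host of the linking document might down-weight multiple links coming from the same server, which relates to the directory question here. The function name, the scoring policy, and the sample scores are all invented for illustration.

    ```python
    # Illustrative sketch only -- not Google's actual implementation.
    # Demonstrates a per-link "weighting factor" (as in claim 4) that is
    # dependent on the host of each linking document: links sharing a host
    # split a single host's worth of weight between them.

    from urllib.parse import urlparse
    from collections import Counter

    def weighted_link_score(linking_pages):
        """linking_pages: list of (url, base_score) tuples.

        Returns a combined score where each link's contribution is its
        base score times a weighting factor of 1 / (number of links
        from the same host) -- a hypothetical policy.
        """
        hosts = Counter(urlparse(url).netloc for url, _ in linking_pages)
        total = 0.0
        for url, base_score in linking_pages:
            weight = 1.0 / hosts[urlparse(url).netloc]
            total += base_score * weight
        return total

    links = [
        ("http://dir1.example.com/page", 5.0),
        ("http://dir1.example.com/other", 5.0),  # same host: shares weight
        ("http://blog.example.org/post", 3.0),
    ]
    print(weighted_link_score(links))  # 5.0*0.5 + 5.0*0.5 + 3.0*1.0 = 8.0
    ```

    Under a policy like this, two directory links hosted on the same server would together count no more than a single link from that server, which would explain why such links appear to lose PR “weighting.”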

  43. You have to check these things to find out, no massive revelations there but still interesting. It does show, though, the importance for SEO of ongoing strategies rather than just a big short-term project to build links.

  44. It seems counterproductive for Google to use anchor text as an indicator of relevance because most websites that link to another website (because they like its content or whatever) will probably not use that website’s targeted keywords as the anchor text. In many cases, they’ll use the website’s URL or website name as the anchor text. I’d hazard a guess that most links with keyword-based anchor text have been placed there by the website owners themselves or someone who’s doing SEO for them. What do you think?

  45. Hi Eliseo V,

    Right. Rankings for specific queries are based upon a whole bundle of signals, from different types of relevance signals to popularity signals like PageRank.

    I’d probably rather have a link from a high PR page than a link from a page that might rank highly for a particular query, but it’s hard to answer that question as a hypothetical. PageRank is just one signal, but it’s a signal that Google uses to indicate that a particular page is important.

  46. Hi Torrado,

    We really don’t know how much of what is covered within these patents might be actually implemented by Google, but I would guess that many of them are things that Google would be experimenting with.

  47. Hi Dave,

    The search engines are constantly working to find ways that the data they can collect and measure might be used to satisfy the intents of searchers. They aren’t standing still, and that means that SEOs can’t stand still either.

  48. Hi Joel,

    People linking to other pages don’t necessarily use the best words or phrases within those links, but as another data point that a search engine might use to rank a page, I think they can still often be useful.

    For example, if you write an article or blog post with a thoughtful and descriptive title, for instance, there’s a decent chance that someone linking to it will use that title. That’s a good reason to try to create great titles for those posts and articles and pages. That’s an approach that anyone linking might follow, and not just SEOs.

  49. Coming to the article from one of the recent posts at SearchEngineWatch. Loved it. Frankly, I enjoyed reading from the second half onward. Text adjacent to links seems to work the best, but I have doubts about the other factor you pointed to here, the text within links. I have heard of (and seen) over-optimization filters and penalties because of over-targeted anchor links, and things seem to work better when there is a sensible use of anchor variation. I’d love to hear your opinions, Bill.


  50. Hi Sunita,

    Thank you.

    I think it makes some sense for the search engines to look at text adjacent to anchor text as well. There is a lot of chatter on the web, and folklore and mythology that circulates through the search marketing community. I’ve seen a few people write about “over targeting anchor links,” but without case studies to back them up, or information from sources like the search engines themselves. Would love to see someone come out with something like that.

    I do like the idea of having some variation in anchor text pointing to pages, and if the text within those links is reasonably related most of the time, I think that’s a good thing. Google’s patents on Phrase-Based indexing likely give more value to links with anchor text that contains phrases that tend to co-occur with the terms or phrases that a page might be optimized for as well. But those patents don’t say that there’s a loss of value if there are too many links pointing to a page with the same anchor text.

  51. Hi Bill,
    It was good to read and informative too. Search engines are constantly working to make their ranking criteria more specific to provide genuine results for their users. What Google is doing matters for us but you never know what it may do next.

    John K. Taylor

  52. Hi John,

    Thank you. We know the end goals of what Google is trying to do – provide quality results that match the intent searchers have when they type queries into a search box. But you’re right in that we don’t know the roadmap that Google will follow to get there.

    It does seem like they are trying to be a little more transparent, with more blog posts, videos, whitepapers, and interviews that describe some of the things they are doing. At least, more transparent than they were in their earlier days.

    I think if we keep that end goal in mind, it helps.

  53. First: sorry for my bad English.

    As Bill says between the lines, the problem with patent analysis is always the same.
    The patents describe the main principles and some technical points, but say nothing about the factor (weight) given to each indicator.

    And even if a patent indicated these factors (weights), nothing ensures that Google uses the factors described in the patent…

    From some experiments I have run since the last change to the PageRank calculation, I think that Google gives less weight to backlink anchor text. It’s simple, and it’s not a bad thing considering the excessive optimization done by SEO teams…

    So, globally, assuming that Google will inevitably change the indicators and their weights in the PageRank calculation in the future, the only good way to do durable, long-lasting SEO work is to not focus on a few specific techniques, even if those techniques are currently the most efficient.

    Instead, you must vary the techniques you use widely, even if it feels like you are diluting your efforts or not spending your time as efficiently as you would hope.

  54. Hi Remi,

    It’s just really difficult to attribute changes in rankings to any one ranking signal as you note. I agree that it makes sense to broaden your efforts to use multiple signals and approaches.

Comments are closed.