When a search engine shows you results for a search, the pages displayed are likely in order based upon a mix of relevance and importance.
But a search engine doesn’t usually stop there, and it may look at other things to filter and reorder search results.
In 2006, I wrote 20 Ways Search Engines May Rerank Search Results, which described several ways that search engines may rerank pages. I followed that up in 2007 with 20 More Ways that Search Engines May Rerank Search Results.
I decided it was time for a sequel or two in this series. I came up with another 25 ways to rerank search results but decided to stop at 10 in this post.
Many of the following are described in patents, and some of those patents were initially filed years ago – prehistoric times in Web years. The search engines may have incorporated ideas from those patents into what they are doing now, adopted those methods and since moved on to something new, or put them in a filing cabinet somewhere and forgot about them (I’d like the key to that filing cabinet).
More important than knowing whether or not Google, Yahoo, or Bing might be using something from within a patent is understanding reasons search engines might have considered one approach or another to rerank search results.
Understanding that can help give you an idea of why a search engine might rerank search results, provide you with a starting point for researching what the writers of the patents and whitepapers included, and give you insight into some of the assumptions behind how search engines perceive search, searchers, and the Web.
Here are ten more ways that search engines may rerank search results:
1. Blended and Universal Search
For many years, most search results at the major search engines were limited to lists of links to web pages. Sometimes, you would see news results or images, but the most common sets of search results pages tended to be a list of “ten blue links.” Now, you’ll often see maps, pictures, tweets, blog posts from social network connections, recent news, and sometimes even actual links to web pages.
When Google launched Universal Search in 2007, one central idea behind it was to present a wider choice of results from Google’s other search repositories involving maps, pictures, news, books, videos, and others to provide a “truly comprehensive search experience.” Or, to put it another way, Google was getting plenty of Web searches, but nobody was clicking on the tabs above the search box to visit pictures or news or other specialized searches.
Before the Universal Search announcement, Google experimented with providing some non-webpage results at the tops of its web search results. This was sometimes called “vertical creep” into organic or “blended” results. A Google patent on the Universal Search interface was filed back in 2003, and an Official Google blog post by Marissa Mayer places the start of Universal Search back to a 2001 brainstorming session.
There’s a lot more than ten blue links in Google’s search results these days, as the following results on a search for “elephant” show:
Another Google patent filing published in 2008 described how these vertical results were interleavened (Google’s word, not mine) into the main web results. Google was more likely to include non-web results if it could place them somewhere on a page other than just at the top, as described in the Official Google Blog post Behind the scenes with universal search.
While this process of interleaving non-web page results into search results doesn’t reorder the web pages in those results, it can push web results down on a page or onto the next page. Yahoo and Microsoft also blend non-web results into what you see on their web search.
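Conceptually, the interleaving might work something like the rough sketch below. The result types, scores, and insertion rule here are illustrative assumptions of mine, not Google’s actual method:

```python
# Hypothetical sketch of blending vertical results into web results.
# The scores, result types, and insertion rule are illustrative only.

def blend_results(web_results, vertical_results, slots_per_page=10):
    """Interleave vertical items (news, images, maps) into a ranked list
    of web results wherever their estimated relevance wins out."""
    blended = list(web_results)
    for item in vertical_results:
        # Find the first web result this vertical item outscores; that lets
        # verticals appear mid-page instead of only at the top.
        position = next(
            (i for i, page in enumerate(blended) if item["score"] > page["score"]),
            len(blended),
        )
        blended.insert(position, item)
    return blended[:slots_per_page]

web = [{"type": "web", "url": f"http://example.com/page-{i}", "score": 1.0 - i * 0.1}
       for i in range(10)]
news = [{"type": "news", "url": "http://news.example.com/elephants", "score": 0.75}]
for result in blend_results(web, news):
    print(result["type"], result["url"], round(result["score"], 2))
```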
2. Phrase Based Indexing
Imagine a search engine looking at the content on one of your pages and identifying which strings of words there fit together into meaningful “good” or meaningless “bad” phrases. It might create an index of good phrases appearing upon pages on the Web, and when someone performs a search, it might look to see which phrases appear upon a certain number of top results for your query. It might then rerank the results by considering how many common “good” phrases co-occur within that set of search results and give more weight to pages with more of those phrases.
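As a rough sketch of that reranking step, with made-up phrase lists, thresholds, and boost values standing in for whatever the patent filings actually describe:

```python
# Illustrative sketch of phrase-based reranking: find "good" phrases that
# co-occur across the top results, then boost pages containing more of them.
# The threshold and boost values are invented for illustration.

from collections import Counter

def rerank_by_phrases(results, good_phrases, boost=0.05, top_n=10):
    # Count how many of the top results each good phrase appears in.
    top = results[:top_n]
    phrase_counts = Counter()
    for page in top:
        for phrase in good_phrases:
            if phrase in page["text"].lower():
                phrase_counts[phrase] += 1

    # Keep the phrases that show up in a meaningful share of the top results.
    common = {p for p, count in phrase_counts.items() if count >= len(top) * 0.3}

    # Give each page extra credit for every common phrase it contains.
    for page in results:
        matches = sum(1 for p in common if p in page["text"].lower())
        page["score"] += boost * matches
    return sorted(results, key=lambda page: page["score"], reverse=True)
```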
That’s one aspect of a phrase-based indexing system that could change the order of search results, as described in many Google patent filings. Some previous posts on other aspects of a phrase-based indexing system from Google:
- Google Aiming at 100 Billion Pages?
- Phrase Based Information Retrieval and Spam Detection
- Google Phrase Based Indexing Patent Granted
- What are the Top Phrases for Your Website?
- Phrasification and Revisiting Google’s Phrase Based Indexing
Google isn’t the only search engine looking at how to rerank search results using phrase-based indexing. Here’s a post about a Yahoo patent filing on the process:
3. Time-Based Data and Query Log Statistics
When we search, the search engines collect information about our searches to glean the intent behind them. A recent Yahoo patent filing tells us how the search engine may look through query logs to see if there might be a time-based aspect to our searches. Suppose there are many previous queries related to ours that have a time, such as a year, associated with them. For example, someone searching for the “world cup” this year might see many search results showing information about “world cup 2010” at the top of the results.
A flow chart from the patent filing gives a quick glimpse at the reranking algorithm:
This process could attempt to see if a query has a time-based aspect to it by checking whether a good percentage of queries in its log files include a year or some other period, by looking at searchers’ query sessions to see if they refined their queries to include a time-based term, or both.
If that analysis indicates a time-based element such as a year, it might rerank search results to boost results that include a temporal term, such as results for “world cup 2010” ranking higher in a search for “world cup.”
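A rough sketch of that kind of temporal rerank might look like the code below; the year-detection test, threshold, and boost value are my own guesses rather than anything from the patent filing:

```python
# Sketch: if query logs suggest a query has a time-based aspect, boost results
# that mention the implied year. Thresholds and boost values are made up.

import re
from datetime import date

def query_has_temporal_aspect(query, query_log, threshold=0.2):
    """Treat a query as time-sensitive if enough related logged queries add a
    year to it (e.g. "world cup" refined to "world cup 2010")."""
    related = [q for q in query_log if query in q and q != query]
    if not related:
        return False
    with_year = sum(1 for q in related if re.search(r"\b(19|20)\d{2}\b", q))
    return with_year / len(related) >= threshold

def rerank_temporal(query, results, query_log, boost=0.1):
    if not query_has_temporal_aspect(query, query_log):
        return results
    year = str(date.today().year)
    for page in results:
        if year in page["title"]:
            page["score"] += boost
    return sorted(results, key=lambda page: page["score"], reverse=True)
```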
4. Navigational Queries
Some searches tend to be “navigational” in nature, and the query is just a shortcut to get to a specific page. For example, I type “ESPN” into my toolbar search box (Google toolbar, Yahoo Search bar, Bing bar) so that I can visit the pages of ESPN quickly (I always forget that “go” between the ESPN and the .com in the URL).
The search engines have identified several perfect matches for these types of navigational queries, and those pages tend to be listed at the top of searches for those terms.
Which pages tend to be the best results for a navigational query? Here are a few posts I’ve written on how a search engine may decide:
- Microsoft on Navigational Queries and Best Match (Microsoft)
- Search Trails: Destinations, Interactive Hubs, and Way Stations (Microsoft)
- Redefining Navigational Queries to Find Perfect Sites (Yahoo)
In a white paper written by researchers from both Yahoo and Google (not sure why the tag-team), Expected Reciprocal Rank for Graded Relevance (pdf), describing how to evaluate pages listed within search results, we’re told that “a perfect grade is typically only given to the destination page of a navigational query.”
So, search results may be reordered to place a specific page at the top when there is an ideal destination page (or perfect page) for a particular query that is perceived as navigational, like my ESPN shortcut.
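As a simple sketch of the idea, a known “perfect” destination could be pinned to the first position; the small lookup table below is a hypothetical stand-in for whatever signals a search engine would really use to identify navigational queries:

```python
# Sketch: when a query is judged navigational and has a known "perfect"
# destination, pin that page to the top regardless of its base score.
# The lookup table is a hypothetical stand-in for real navigational signals.

NAVIGATIONAL_DESTINATIONS = {
    "espn": "http://espn.go.com/",
}

def rerank_navigational(query, results):
    destination = NAVIGATIONAL_DESTINATIONS.get(query.strip().lower())
    if destination is None:
        return results
    # Move the destination page (if present in the results) to the top.
    pinned = [page for page in results if page["url"] == destination]
    rest = [page for page in results if page["url"] != destination]
    return pinned + rest
```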
5. Patterns in Click and Query Logs
A search engine looking through its query logs might find patterns related to the query terms used in query sessions and in choices of links people click upon. The abstract to the Google patent Rank-adjusted content items tells us that:
Click logs and query logs are processed to identify statistical search patterns. A search session is compared to the statistical search patterns. Content items responsive to a query of the search session are identified, and a ranking of the content items is adjusted based on the comparison.
Imagine many people search for “Chevrolet carburetor,” then for “Chevrolet Carborator Rebuild Kit,” and follow up with a search for “classic Chevy carburetor kits.” They then frequently choose “http://www.example.com/classic-chevrolet-carborators.html.” If someone else comes along with searches for the same or very similar queries during a query session, the page at “http://www.example.com/classic-chevrolet-carborators.html” may be boosted and rank higher in search results for that searcher.
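A loose sketch of that kind of session-based boost follows; the session-similarity test and boost value are simplifications of my own rather than the patent’s actual method:

```python
# Sketch: boost pages that earlier searchers in similar query sessions tended
# to click. The similarity test (shared query terms) and boost are invented.

def rerank_from_click_patterns(session_queries, results, historical_sessions,
                               boost=0.15):
    current_terms = set(" ".join(session_queries).lower().split())
    clicked_urls = set()
    for past in historical_sessions:
        past_terms = set(" ".join(past["queries"]).lower().split())
        # Crude similarity: enough overlap between the sessions' query terms.
        overlap = len(current_terms & past_terms) / max(len(past_terms), 1)
        if overlap >= 0.5:
            clicked_urls.update(past["clicked"])

    for page in results:
        if page["url"] in clicked_urls:
            page["score"] += boost
    return sorted(results, key=lambda page: page["score"], reverse=True)
```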
6. Google TrustRank and Yahoo Dual Trustrank
In 2004, a Yahoo whitepaper described how the search engine might identify webspam by looking at links between pages. That paper was mistakenly credited to Google by many people, most likely because Google was trying to trademark the term “TrustRank” around the same time, but for different reasons.
Surprisingly, Google was granted a patent on something it referred to as Trust Rank in 2009, though it’s a Trust Rank that is very different from Yahoo’s. Instead of looking at the ways sites linked to each other, Google’s Trust Rank looks at how much it trusts the people who have labeled web pages in annotations of those pages, somewhat like the label scrawled on the old photo above.
Google allows people who create custom search engines to apply “labels” to pages, as well as annotations in other places, such as Google’s Sidewiki. Not surprisingly (good ideas seem to follow a reuse/recycle practice in search engine circles), Yahoo added a social aspect to their TrustRank as well, mixing “trust” in annotations and user-behavior signals associated with pages and TrustRank scores to come up with something they called Dual Trustrank (double the trust, double the fun?).
Both Google’s TrustRank and Yahoo’s Dual Trustrank could be used to rerank search results.
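A very loose sketch of annotation-based trust reranking might look like this; the trust values, labels, and weighting are invented for illustration only:

```python
# Sketch: pages labeled by annotators the engine trusts get a score bump
# proportional to that trust. Trust values and weighting are illustrative.

def rerank_with_label_trust(results, page_labels, annotator_trust, weight=0.2):
    """page_labels maps url -> list of annotator ids who labeled that page;
    annotator_trust maps annotator id -> trust score in [0, 1]."""
    for page in results:
        annotators = page_labels.get(page["url"], [])
        if annotators:
            average_trust = sum(annotator_trust.get(a, 0.0)
                                for a in annotators) / len(annotators)
            page["score"] += weight * average_trust
    return sorted(results, key=lambda page: page["score"], reverse=True)
```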
Do we see hints of an approach like this in Google’s Social Search?
7. Customization based upon previous related queries
The terms you just searched for may influence what you see in your next search if the queries are determined to be related. At least, that’s according to the Google patent Methods and systems for improving a search ranking using related queries.
Google sometimes shows an announcement at the top of search results telling you that Google has customized them based upon your location or because of your previous queries. I wrote about this patent in How Searchers’ Queries Might Influence Customized Google Search Results.
Why might Google consider some queries to be related to others? Some possibilities (a rough sketch using a couple of these follows the list):
- Others having used the same sequence of query terms previously (whether once or multiple times),
- Queries input by a user within a defined time range (e.g., 30 minutes),
- A misspelling relationship,
- A numerical relationship,
- A mathematical relationship,
- A translation relationship,
- A synonym, antonym, or acronym relationship, or other human-conceived or human-designated association, and
- Any computer or algorithm determined relationship.
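Here is the rough sketch mentioned above. It uses only two of those possible relationships (shared query terms, and a crude string similarity to catch misspellings); the relatedness tests and boost amount are my own illustrations, not the patent’s method:

```python
# Sketch: if a recent query in the same session is judged related, pages that
# ranked well for it get a boost for the current query. The relatedness tests
# and boost value are illustrative guesses.

from difflib import SequenceMatcher

def queries_related(query_a, query_b):
    shared_terms = set(query_a.lower().split()) & set(query_b.lower().split())
    similarity = SequenceMatcher(None, query_a.lower(), query_b.lower()).ratio()
    return bool(shared_terms) or similarity > 0.8  # similarity catches misspellings

def customize_by_previous_queries(current_query, recent_queries, results,
                                  top_pages_for_query, boost=0.1):
    for previous in recent_queries:
        if not queries_related(current_query, previous):
            continue
        for url in top_pages_for_query.get(previous, []):
            for page in results:
                if page["url"] == url:
                    page["score"] += boost
    return sorted(results, key=lambda page: page["score"], reverse=True)
```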
8. Being Linked to by Blogs
Microsoft’s Ranking Method using Hyperlinks in Blogs describes how more “PageRank” might be distributed to pages that are linked to by blogs. The patent focuses on explaining how they might distinguish blogs from non-blogs.
I wrote about their approach in Do Search Engines Love Blogs? Microsoft Explores an Algorithm to Increase PageRank for Pages Linked to by Blogs. Why blogs? The patent inventors tell us that blogs tend to be:
…frequently updated, more informational than personal, and free of spam.
The approach was tested with many pages, but they tell us that it might lose some value as more “spam blogs” become prevalent on the Web.
This particular method may have lost some value over the past couple of years with the proliferation of splogs. It may be an excellent example of reranking search results that might change over time.
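As a very loose sketch of the underlying idea, links from pages classified as blogs could pass along a larger share of score than links from other pages. The blog classifier, weights, and simplified one-step scoring below are my own assumptions, not the patent’s actual formulas:

```python
# Sketch: links from blog pages pass along more score than links from
# non-blog pages. Weights and the one-step scoring are illustrative only.

def blog_weighted_link_scores(pages, links, blog_weight=1.5, base_weight=1.0):
    """pages: url -> {"is_blog": bool, "score": float};
    links: list of (source_url, target_url) pairs."""
    boosts = {url: 0.0 for url in pages}
    for source, target in links:
        if source not in pages or target not in boosts:
            continue
        weight = blog_weight if pages[source]["is_blog"] else base_weight
        # Spread the source page's score across its outgoing links, counting
        # links from blog pages more heavily.
        outgoing = sum(1 for s, _ in links if s == source)
        boosts[target] += weight * pages[source]["score"] / max(outgoing, 1)
    return boosts
```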
9. By Ages of Linking Domains
While the age or “maturity” of a domain might be something that helps a web page rank higher in search results, a Microsoft patent filing, Ranking Domains Using Domain Maturity, looks instead at the ages of the domains linking to a page.
A page might rank higher if it is linked to by sites that have some age and have been around the block a few times instead of ones newly on the Web. The patent filing does tell us that the “maturity” of a domain may be reset if the domain expires or changes hands.
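A rough sketch of that kind of “maturity” signal follows; the age scaling, the cap, and the reset handling are invented for illustration:

```python
# Sketch: a page gets more credit when the domains linking to it are older,
# and a domain's age resets if it expires or changes hands. Scaling is made up.

from datetime import date

def domain_age_years(domain_info, today=None):
    today = today or date.today()
    # A reset date (expiration or change of ownership) overrides first-seen.
    start = domain_info.get("reset_on") or domain_info["first_seen"]
    return (today - start).days / 365.0

def maturity_boost(linking_domains, domain_registry, per_year=0.02, cap=0.3):
    boost = 0.0
    for domain in linking_domains:
        info = domain_registry.get(domain)
        if info:
            boost += per_year * domain_age_years(info)
    return min(boost, cap)
```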
10. Diversification of Search Results
The New York Times surprised us in 2007 with Google Keeps Tweaking Its Search Engine. The article introduced the concept of “Query Deserves Freshness” (QDF), which attempts to decide when searchers might want search results with fresher pages or with older pages. It’s a concept worth including in this set of reranking approaches. Still, it leads to the question, “Do the search engines also try to follow a ‘Query Deserves Diversity’ algorithm to provide searchers with diverse results when queries might have more than one meaning?”
The chances are that they do. When someone searches for a term like “java,” the intent might be to learn more about Java programming, the island Java, or the beverage Java. A search engine could just show the most relevant and essential pages that come up in a search for “java” (probably the programming language everywhere in the world but the island of Java). But some searchers might be interested in the coffee or the island, and some diversity in search results may be a good idea.
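One simple way to picture a “Query Deserves Diversity” rerank is a round-robin pick across the different senses of a query, as in the sketch below; the sense tags and slot count are hypothetical inputs, not anything from the patents:

```python
# Sketch: when a query has several plausible senses, pick results round-robin
# across senses instead of purely by score. Sense tags are hypothetical inputs.

from itertools import cycle

def diversify(results, senses, slots=10):
    """results: ranked pages, each tagged with a 'sense' key (for the query
    [java], senses might be 'programming', 'island', and 'coffee')."""
    by_sense = {sense: [p for p in results if p["sense"] == sense] for sense in senses}
    diversified = []
    for sense in cycle(senses):
        if len(diversified) >= slots or not any(by_sense.values()):
            break
        if by_sense[sense]:
            diversified.append(by_sense[sense].pop(0))
    return diversified
```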
A Microsoft patent, Diversifying Search Results for Improved Search and Personalization, tells us what they might look at when deciding to diversify query results.
I covered those in Reranking Search Results Based Upon Personalization and Diversification, and Google and Yahoo both may look at similar factors in deciding when to diversify the search results that they show.
It took some time to read this – could eventually boost your position hhmmm…
Yes, indeed many things do really make sense, but on the other hand “real life” shows us that the perfect results are still far away, and probably will never be found.
In fact some things look contradictory: freshness of content versus maturity of domain – it should make sense, but there is little evidence that this is really used (at least for a longer period).
And user behaviour also seems not to be used as a measuring factor there, which might make sense as it can be manipulated.
But lets see.
Hi Andy,
Thank you for raising some very good points.
Looking back at the first two posts in this reranking series, I did write a lot less for each of the items I covered. That’s why I stopped at 10 with this post.
Google did tell us in the New York Times article that freshness is one of the factors that they care about considerably. But, the algorithm focuses as much upon deciding whether or not results for a query term should include fresh content as it does upon determining where that content might come from. For example, someone searching for “alligator boots” might be more concerned about the price and quality and availability of the boots than they are in getting the “freshest” page. On the other hand, someone searching for “Gulf Oil Spill” may be more interested in the latest information about the ongoing spill in the Gulf of Mexico.
The “maturity of domain” method described in the Microsoft patent does seem like it might be somewhat contradictory to that “freshness” approach on the surface, but maybe it isn’t. A page from an older and established domain, or a page on a new domain linked to by older and established domains might appear more mature than a page on a brand new domain without any links to it from domains that have been around for a while.
Freshness also is a question of the topic or query term used on a page rather than the maturity of the domain that page appears upon. So it’s possible that a search engine might give more weight to a page on a fresh topic from a mature domain (or one linked to by a number of mature domains) than the same fresh topic on a brand new page (without links to it by mature domains).
There do seem to be more user-behavior factors finding their way into ranking and reranking approaches. For many of these reranking approaches, the search engines appear to be looking at information found in their search log files more than at things like newly discovered trends in the creation of content or blog posts. For example, a topic might be determined to be fresh not only because there are new pages about it on the Web, but maybe more so from seeing a rapidly increasing amount of interest in that topic in searchers’ queries.
Hi Bill,
First off, Great post!
Second, I agree about the fresh content vs. mature domain name thoughts. Really, what the search engines “ideally” would like to rank higher is fresh content put on a mature site. The reason being that the search engines have more, should I say, trust in the mature sites. They like to see sites ranking that are not going to pop on the scene and 3 months later are no longer there or are no longer maintained.
Having fresh content on a fresh site is great and the search engines will see that, however unless you get people linking to you that have those mature sites, it’s gonna be a while.
Your thoughts?
I have some internet marketing tips that you may find interesting also if you are interested. I would love to hear your thoughts on some of our articles.
Ryan Davis
CEO
Thanks, I just read your first 20 ways, then another 20 ways, and now these 10 ways. It’s really valuable for me to know because I am not good enough in SEO; that is why I always read your blog. Honestly, I like your first 20 ways most. Thanks
I definitely believe points 4 and 5 have a great impact nowadays on the SERPs. There is no other good way of determining what sites the users like, especially on high volume keywords. Time spent on site and bounces play an important role here, I think. Both these things are easily tracked on high volume keywords.
I have to agree with Alamin, your blog posts always give us an insight on how search engines act. And if people would understand better how these search engines act, then SEO would really be very much easier. Thank you for this post of yours. It is a long but good read. Very informative.
Are the ages of linking domains really important?
Hi Ryan,
Thanks. What’s interesting about these possible different reranking approaches is that a search engine may attempt to apply more than one of them at a time, which can make it hard to pinpoint why one result might outrank others.
And there may be competing assumptions involved within individual reranking approaches. As I wrote in my comment to Andy, freshness of results isn’t always important for a particular query.
Imagine the search engine looks at a number of different signals to decide whether it should emphasize fresh content in search results.
One of those could be if there was a sudden and rapid increase in searches for a particular query term. That might indicate that there’s something going on in the world that we possibly should be aware of – this sort of considers a search engine as a current events/ideas awareness mechanism.
Another signal regarding freshness might look at the “ages” of a top certain number of results for a query – if they all tend to be older pages, or ones that the search engine learned about a while back, the search engine might not consider freshness to be important to that query term. This signal may be viewing a search engine more as part of a reference library – all of the books (or pages) on this subject are old, and there are no new books on it, so freshness for this topic or query isn’t important, and we aren’t going to emphasize fresher content. If the search engine starts seeing newer results for that query or related queries, it might be an indication to show fresher content.
So, for example, someone proposes a new model for “evolution” and a lot of people start writing about it, and searching for it. It may become a query that does deserve freshness.
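Here’s a rough sketch of how those two signals might combine; the thresholds are just illustrative guesses on my part:

```python
# Sketch: decide whether a query "deserves freshness" from a spike in searches
# and from the ages of the current top results. Thresholds are illustrative.

def query_deserves_freshness(recent_query_count, baseline_query_count,
                             top_result_ages_days,
                             spike_ratio=3.0, stale_age_days=365):
    # Signal 1: a sudden, rapid increase in searches for this query.
    spiking = recent_query_count >= spike_ratio * max(baseline_query_count, 1)

    # Signal 2: if every top result is old, freshness may simply not matter
    # for this query, unless searchers' interest is spiking.
    all_results_old = all(age > stale_age_days for age in top_result_ages_days)

    return spiking or not all_results_old
```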
Hi Alamin,
Thank you. These compilations of ranking results have been fun to write, which must be why I’ve returned to the topic a few times.
Thank you, Andrew,
I think sometimes people create models of how search engines act that are too simplistic, and don’t question how a search engine might explore ways of presenting information that goes beyond matching query terms to documents that contain those terms. In a lot of ways, search engine results are like multi-faceted puzzles, with a number of possible competing assumptions behind the different facets. Looking at different kinds of reranking methods, you can see that different approaches may emphasize different aspects of how a search engine might rank pages.
Bringing universal search into the picture means that a search engine recognizes that people might want more than just textual web page results. But it wasn’t until the search engines could define whether a video might be more or less relevant than a web page or a news article that we started seeing those non-web pages appearing in places other than the tops of results. Once we did, the search engines started showing more of those non-web pages.
Identifying ideal pages for navigational queries means that some sites that might not be the most relevant, according to information retrieval ranking methods, or important, according to quality metrics like link analysis, should still be the first result. And it seems that the search engine might allow those navigational “ideal” or “perfect” pages to show up first regardless of other reranking approaches as well.
So, if I search for “BP” at Google, even though the company’s name is very much a hot topic with lots of searches and lots of new pages and blog posts and news articles written about them, the first result is the company’s home page rather than pages or news articles about them, likely because it has been identified as a “navigational query.” Of course, the results under the home page contain fresh news articles and content that may be ranking higher than it should based upon relevance/importance alone, because it is fresh content.
Hi Per,
Sometimes it’s easier to see the effect of some reranking approaches than others, and whether or not they may even be used. For example, the pages at the top of navigational queries tend to stand out as ones that have been moved to the tops of results. It’s harder to tell with some of the others, like # 5 on the influence of patterns in click and query logs. It may just influence the results that we see, but it’s possible that some other reranking approaches that look at click and query data as well influence those results as well.
Regardless of whether it does or not, we have seen signs from the search engines that they are willing and ready to adjust or customize our search results based upon our previous searches and clicks and those of others.
Hi Uğur,
It’s quite possible that the search engines are looking at some kind of historical data to influence the ranking of search results. It’s tempting to say that the age of a site might be something that a search engine might value, but a site can be old and not too good, or freshly published and a great resource.
One of the things that the reranking approach I mentioned in # 9 on ages of linking domains assumes is that new sites that are linked to by older established sites are likely to be higher quality sources of information than new sites that aren’t linked to by older established sites, and may be better than older domains without links like those as well.
Of course, it’s possible that none of the major search engines is using this approach, but thinking about the ages of sites, of links to those sites, and of the ages of pages linking to other pages probably isn’t a bad idea.
One of the more interesting aspects of the patent mentioned is that it says that a search engine might “reset” its perception of the maturity of a domain if it expires or changes ownership. Do search engines do that? It’s possible that they might look at other things as well. If a domain suddenly appears to have new owners, that by itself isn’t surprising. Companies are sold every day, or go through mergers. But, if there appears to be a change of ownership, and other changes to a site such as a drastic shift in the topics it focuses upon, those might then be a signal to no longer treat a specific domain as if it were mature.
Hey, that’s a cool roundup of some of the many factors that may be messing with your rankings, but seriously, how many patents are actually used?
The customisation based upon previous related queries is already being used on AdWords results with their session-based broad match, but I’m not sure they would allocate the resources for individual users. I could see them starting to play with the did you mean or those blue related search terms they are showing as a smarter way to do this; the CTR for those related terms may influence the primary search term in time.
The Microsoft patent could be interesting and could easily be expanded to cover social media references, but the point is that the blog or site with the most mentions is not always the best site; it may be the biggest spammer, loudest marketer, or just plain lucky. That doesn’t directly relate to quality, but I know it can have a massive impact on crawl rates.
Hi David,
Thanks. With some patents filed by the search engines, it’s fairly obvious that they’ve incorporated the approaches described by those into what the search engine does. For example, Google has a number of patents on things like web history, personalized search, sitelinks, the onebox for maps and other applications, webmaster central applications, navigational results listed at the tops of search results pages, and many others that provide details about how those may be implemented.
With other patent filings, like the customization based upon previous queries, you have to look a little harder, but you can see signs of their use. For instance, Google does sometimes show a message at the top of search results stating that results have been customized, and to click on a link to find out how. If you do, the page you arrive at either tells you that your results might have been customized based upon your location or upon previous searches that you performed. All of the search engines have published patent filings or white papers or both describing the query suggestions that they sometimes present at the top or bottom of search results or in the dropdowns under search boxes as you type in a query.
Some additional patent filings describe things that a search engine might be doing that you have to look even closer to see whether or not they might be using them, and may have to do some testing and experimentation to see if they might be using the processes contained within those patents. A couple of examples are where a search engine might show searchers different results based upon a preferred country or language.
Still others might be considerably harder to see the results of, or whether they might be used by search engines. The Microsoft patent on boosting the pagerank for sites with links from blogs may be one of those. There are a couple of Google local search patent filings that tell us that a business might rank higher in local search results based upon mentions of a business with some additional location information included with those mentions without requiring an actual link. Does Google do that? It’s worth experimenting with.
There are likely patents published by the search engines which describe approaches that haven’t been used, and may have been written to protect the processes they contain as intellectual property and possibilities.
One of the problems with looking at patents is that they tend to be a few years old once they are granted, and may reflect ideas that have been explored and possibly even replaced by newer ideas. But sometimes even the older ones continue to have value. The Google patent on a Universal Search Interface I mentioned above was filed way back in 2003, developed from brainstorming sessions as far back as 2001, and Google didn’t officially launch Universal search until late 2007.
It’s also possible that a patent can provide some insight into something that Google might do in the future. For example, a Google patent on Web history was filed even further back than the Universal Search patent, and a couple of whitepapers published by Googlers this year describe some very interesting ways that Google Web History can be expanded upon that look like they can make it very useful. I thought that was interesting in light of a recent statement from Google that Web History is turned on by default for everyone with a Google Account because they may not know how useful it really is.
When a pending patent from a search engine is published, in many cases, it’s often only around 14 months old, so patent applications tend to cover topics that are fresher than granted patents. It’s worth taking the time to see if there might even be a white paper or two from a search engine that might describe what’s in that patent application – It’s not uncommon to run across one from Microsoft or Yahoo that cuts out much of the legal language from the patent, and possibly even describes some experiments that have taken place while developing the technology described.
I’m not sure that I could guess what percentage of patents are actually used, but even ones that are more likely to not have been used often provide some interesting insights into the direction of research at a search engine, as well as some of the thoughts and ideas behind their development.
What’s interesting to me about the domain “maturity” is how a search engine can tell if a domain has been traded or transferred. The only method that springs to my mind (for determining who a site owner is) is through the Domain Registrar info (WHOIS), but the WHOIS data is not accurate (given that for $5 I can hide this information anyway).
Unless it uses other signals from the site itself?
You really put some great information into this article.
As far as the maturity vs. freshness, think of it this way: if you hear something new from a 15 year old, then later hear the same information from a 40 year old, whose information are you going to put your faith in? The teenager or the mature adult?
Basically, it’s the same thing with a website. Older websites generally have a little more authority, simply because they’ve existed longer, and have built up a reputation. The fresh content merely lets Google know that the site is still active and providing up-to-date content.
Jennifer
How do you explain one-page website with hundreds of backlinks ranked number one on Google listing vs. website with hundreds of keyword focused pages with less backlinks and ranked much lower? How to beat him?
Hi Sataris,
There may be other sources of information that can be helpful in determining whether or not a business and its site has changed ownership.
For instance, Google has started requiring that sites using the Google Merchant center verify their accounts and ownership.
Verify and Claim Your Website URL in Merchant Center
Likewise, anyone using Adwords or Adsense would need to provide some information to Google.
Google Local Business Center also has a requirement of verification for ownership of a business.
In addition to someone explicitly sharing information about their business with Google to use one of their services, there may be other places the search engine could possibly look. Contact information and location changes on a site could be one sign of different owners. I’ve seen at least one search-related patent filing (Google’s patent on Information retrieval based on historical data) mention that a search engine could start looking at things like hosting changes and changes to name servers, as well as drastic changes to the topics covered by a site, and linking changes to get a sense of whether or not there might have been a change. It also mentions that the search engine might create a “list of known-bad contact information, name servers, and/or IP addresses [which] may be identified, stored, and used in predicting the legitimacy of a domain and, thus, the documents associated therewith.”
You are right – it can be challenging to identify whether a domain has been traded or transferred, especially if nothing much about the site changes after the transfer. And if there isn’t much that changes, perhaps a site should retain its maturity. But if a site changes ownership, moves to a new host, changes a considerable amount of content on its pages, updates with a new design, removes a good number of links and replaces them with a good number of new ones, alters or hides whois information, and so on, then in many ways, it has become a new site.
Hi Jennifer,
That is an interesting analogy. Thank you.
@Bill
Yes, I’ve seen that additional localised text; it can help, but there still needs to be some link back to their site on the page to get a true measurable benefit.
I do agree that Yahoo patents are typically much easier to read, but I haven’t taken as much time around Microsoft patents as I could.
But I guess it would be similar to Coke’s recipe, not all the really juicy ideas are patented in case their competitors can reverse engineer it or improve on it…
@Sataris well Google is a registrar, so they can get a whole lot of juicy details not publicly available, and mostly I think hiding the WHOIS data can be a negative signal…
@Jennifer I agree with Bill that it is an interesting analogy about the age, but it can also depend on what information you are seeking.
If I ask my cousins about the Jonas Brothers and then ask my uncle the same question, where would I put my faith for the best response?
My uncle’s answer might be about who writes their music or who produces their albums, and he doesn’t really like them so is not going to talk about it anymore. My cousin may provide information such as when they were formed, background on the band, who they are dating, when they are touring, how much their CDs cost, how many albums they have sold, and, if I want, there is an MTV interview I can watch on his iPhone.
So the point is that even though the older person may have authority and more knowledge, they cannot possibly always know everything and stay up to date via Twitter/Facebook on what’s happening with the Jonas Brothers.
The dinosaurs also existed longer than most species, but where did that get them? I think too many sites rest on that authority factor, and providing up to date content may no longer be sufficient if I can get a newer entertainment site like TMZ telling me what they just ate for lunch. We are consumers and, one could say, are moving towards the attention span of a goldfish, so which way will Google go?
I think this is a valuable post, particularly for part time and DIY SEOs. A 3 or 4 ranking might remain in the regular SERPs, but your traffic can get clobbered by blended search, etc. Very interesting to get in depth detail on these elements.
Regarding the point you made about blended results, wherein results include documents apart from the usual mix of web pages and news bits, PDF documents that originated from .org or .gov links seem to have a greater prevalence. I believe that this is simply because such extensions denote a greater degree of credibility than other, lower-ranked sites.
Hi David,
Not quite sure what you’re saying. Google will customize some sets of search results based upon queries that you’ve previously performed if it thinks those earlier queries are related. They will also sometimes customize search results based upon location as well. In both instances, that customization means that the results you are seeing have had some pages boosted in search results based upon either earlier queries that you’ve performed, or upon whether or not they think your location might be helpful.
Hi Zee,
In that instance, it might be that the search engine considers the single page website to be more focused and relevant for the specific query that you entered, or more “important” based upon the backlinks to the page, or both.
How to beat that site? One step is to try to increase how relevant or important or relevant and important your page might be perceived by the search engine. That could mean taking a close look at the architecture of your site to see how search friendly it is, the anchor text used in links to that specific page from both inside of the site and outside, the links pointing to your pages and possibilities for getting new links, and a number of other factors.
This series of posts on how search engines might rerank search results provides some additional information on things that a search engine might look at in addition to how relevant your page might be for a query, and how important the page might be considered based upon things like PageRank.
For instance, is there another page on the Web that substantially duplicates the content of your page, so that your page is being filtered out of search results? Is the other page being considered a navigational ideal, or perfect page, for that query? Does the other page contain some fresh content in it that yours doesn’t, for a query where the search engine thinks that fresh content is beneficial to searchers? Some of the reranking methods that I’ve included in the three posts within this series may influence the rankings of your page or the other page.
Hi Larry,
Thanks. There are a lot of potential reasons why one page can rank higher than others based upon things that we might not consider when we just look at links to a page and the keywords that appear upon it. Hopefully these posts on possible reranking approaches that the search engines could be using can help to identify some of those.
Hi anubhav,
I’m not sure that I could state with any certainty that government or .org pages get any kind of boost in rankings based solely on the fact that they use a .gov or .org.
There may be other reasons why some of those pages rank well, (or are reranked), such as being an ideal navigational result page for a particular query.
There isn’t a requirement that a .org site actually be a nonprofit, like there is for a site using a .gov being a government site. But even if there was, there’s no guarantee that a .gov or a .org (or even a .edu) may be the best source of information for many queries.
Very good read, although admittedly I didn’t understand half of what was said. But that’s ok – I’m learning.
From what I can tell, the point you made about the search engines customizing search results based on past searches for a similar query seems like it will become more and more popular with the ever increasing idea of personalization. We already have the ability to “like” particular searches in search engines – soon enough the search engines will just do it for us.
Which makes it that much more important that you be informative AND personal in your content creation.
Thanks for the read – I’ll be back for sure.
What happens to domain maturity when the entity is simply renamed? For example, it is not all that rare for the name of a governmental agency to change without there being any change at all in that agency’s role. If an agency has been around for 20 years, but the legislature changed its name 9 years ago, shouldn’t the domain maturity be 20 years?
And the same could happen with a corporation. For example, when British Petroleum rebranded itself as BP, there might have been a profound change in its domain name. (I don’t know that there was, but let’s imagine.) Would the new bp.com have been any more of an authority on extracting and marketing petroleum, or any less of an authority on protecting the environment, than the old britishpetroleum.com?
And for governmental agencies, domains can change even when the agency’s name doesn’t. In Texas, a whole host of state agencies are about to have their domains changed from [abbreviation].state.tx.us to [abbreviation].texas.gov. Will all the pages of all those agencies take a hit on maturity scores when that change is implemented?
I read the patent application, but I didn’t see where that twist on a changing name was covered. Hopefully, they’ve thought of those eventualities.
I think the biggest change will be the implementation of an algo that uses social media and the “voting” public as a large part of it.
Hi Thomas,
Thanks.
The idea of customizing results based upon previous queries may become more popular, though it’s possible that Google may use different ways to allow those previous searches to influence the results that we see. For instance, the example that you provide, where we can “like” some results that we see for certain queries.
Hi Cliff,
Good question. If you’ve ever seen a site change domains and not use things like 301 redirects to inform the search engines of the move, you know that can be a real problem. Google also added the ability to inform them of changes in Webmaster tools to try to capture that information as well.
Unfortunately, I’ve seen many government sites change domains without bothering to guide the process intelligently by using 301 redirects or update links to the new addresses on other government sites that link to those pages. I recently had to raise the issue in a town meeting in my town to get the town to use a redirect on the older .com version of their web pages to the newer .gov version.
The focus of that Microsoft patent was fairly narrow in describing how links from older mature sites might help make a newer site be considered more mature. It did mention how a change to a site might “reset” their analysis of how mature a site might be, but you’re right – it didn’t cover domain name changes. How a search engine treats changes in domain names is somewhat of a gray area, and it’s a change that should be undertaken with an understanding that there is a risk involved.
Hi Web Design,
There are a few different reranking approaches that may involve social media, and they may have a growing influence on the rankings of web pages in the future. It’s definitely an area to watch.
Hi Bill,
Long time reader, first time poster. I’ve been looking at some possible ranking factors of search engines (largely through your patent summaries) and to that end have created a simple spreadsheet applying weightings to some possible factors (link no longer available). I’d be interested in your thoughts on the weightings and whether these ranking factors could in fact be implemented over such a large network.
Cheers,
chris
Hi Chris,
Thanks for being a regular reader, and for deciding to leave a comment on this topic.
It’s hard to apply specific weights to different factors, and I wonder sometimes if it’s worth trying. I know it helps to be aware of as many of the possible factors that the search engines look at as you can, and to anticipate that they might play a role in how well a page might rank for a specific query.
Your document is pretty interesting, and I would love to see a write up of the ranking factors that you’ve used and how you think they might influence how a page may be ranked. I do like many of your choices of things to consider.
Very informative post. I’m new to SEO and still am trying to learn the basics. Thanks for sharing this post. If you get the chance, feel free to visit my site.
Hi Jason,
You’re welcome.
As an individual who has been doing SEO for some time, re-ranking search results has been something I have found hard to explain. The paragraph on Phrase Based Indexing was quite interesting and has left food for thought.
I have often thought that the possibility of a massive change to the way Google ranks search results is quite scary. A change that put less emphasis on keywords within domain names and far more weight on the quality of links over the quantity would really make a difference to the results Google returned, which could mean for many that they have wasted much time promoting their sites with techniques that quickly become old news in the world of SEO. I guess adapting to the change and reading great articles like the one above is the best chance an SEO has of staying on top of the game. Thanks for sharing.
Hi Andrew,
One of the difficulties might be that there are just so many different kinds of reranking approaches or filters that search engines might use. It isn’t something that a lot of people discuss, which is why I’ve been writing some posts on the topic.
The phrase-based indexing patent filings are definitely worth spending some time with. There’s definitely a decent possibility that Google is using that kind of indexing, and even if they aren’t, the description of how Google could use phrase-based indexing is a good example of how it might apply a reranking approach.
Hi James,
Thank you. Change seems to be one of the constants in the world of search engines these days – we often hear from people like Google’s Matt Cutts that Google makes at least one change a day to their ranking algorithms, for instance. There may be some value in terms of SEO to domain names, but there are only so many domain names to go around, and often names for businesses and sites are chosen even if they have little to do with the main product or service offered by a site, such as Amazon or Yahoo.
As for links, under a system like PageRank, quality has always been more important than quantity. A single good link can have more value than thousands of bad links – that’s the whole point behind PageRank.
SEO is very confusing at times. You can be on page 1 of google one day and not even found the next day??
Hi Jamison,
That is one of the challenges of SEO. Your rankings can change based upon what your competitors do with their site, they can change based upon changes to a search engine’s algorithms, and they can change based upon changes you make to the site as well. It also seems that the search engines may be basing more aspects of ranking on user-behavior data as well, so changes in the ways that people search can also influence your rankings.
Very informational post. I know that Google has over 200 ranking factors, although you only need to follow about ten of these well to rank higher.
I know that Google is always finding better ways to rank sites and is always implementing new ones, such as IP location, and even includes Twitter results now.
Hi Craig,
Thanks.
Google does supposedly look at a wide range of signals in ranking pages, and some of those seem like they carry more weight than others. In addition to those ranking factors are a large slew of filters and reranking approaches, which I’ve been writing about in this series of posts.
If you address a number of those initial ranking factors when creating a web page, it’s still possible that your page may be reranked in search results, which is why I’ve been trying to draw some attention to them, here.
Since May Google has been reducing page rank and pushing many pages to the secondary index. This is creating a hard time for SEOers.
Hi Joe,
Is this something that you’ve been seeing on sites that you monitor? It is the kind of thing that would account for some sites not doing as well as they might have in the past for a number of longtail terms. I’ve seen a few forum threads discussing this, and a few responses from people at the search engines that suggest such a change was made to bring higher quality content to search results. Not a very clear answer in itself; it may imply accruing more links to deeper pages of a site, or higher quality content, or perhaps both.
As impressive as all of these innovations are, the sad truth is that the better Google gets at what they do, the tougher the job of tweaking rankings for clients becomes. Also, with all of the (optional) clutter available via localized search on the SERPs, the absolute value of a #1 ranking can only be diminished… or not so?
Hi Matthew,
It’s true that it might be more difficult to make changes to a site that may help it to rank higher, but understanding how and why a search engine might rerank pages gives you some idea of the reasons behind your rankings, and possibly some things to do that may address those obstacles.
Thank you! Points 4 and 5 have a great impact nowadays on how I work with SEO.
Hi Andrew,
You’re welcome.
Definitely, those are issues that I try to keep in mind as well, when I do SEO.
I find click patterns have become an increased ranking factor this year. What is your experience?
Hi Markus,
I think click patterns are probably something that the search engines have been paying more attention to, in ranking pages. But I suspect that it’s just one part of many user behavior signals that may influence how web pages are ordered in search results.
I’m with Markus on this one. I’ve also found that click patterns – and especially the clicks on the back button – play a huge role in rankings nowadays.
So does getting mentioned by big guys on Twitter.
/ Nabil.
Hi Nabil,
I’m not sure that it’s really all that easy to determine the impact that the kind of click patterns I’m writing about above might have on rankings without access to the query log files of the search engines.
On the other hand, if people are visiting pages from search results, and are very quickly clicking their back button to return to the search engines, that might be a clue to the search engine that the page visited may not be what a searcher intended.
Hi Bill,
Your article is really interesting and full of information. I already knew most of it because I’m a search engine consultant, but it’s really nice to get this information gathered on the same page. Your explanations are really comprehensible.
What I think about the future of results ranking is that more semantic technologies will be used to provide the best quality answer to a search query. Google is just beginning with semantic technologies but has already understood that it’s necessary to give internet users other ways to search for an answer in its index. For example, the magic wheel, chronological results, sorting by date, …
Hi Mike,
Thanks. It is interesting to see Google offer alternative methods of ranking pages like chronological results, and using things like the Wonder Wheel. While many of the reranking approaches I’ve been writing about tend to happen behind the scenes, Google is offering more ways for searchers to make changes to the results that they show.
For example, you can change your location to see results appropriate to a specific location. You can now change the time period of results to things like “any time,” or “latest,” or “Past 24 hours,” or even a custom date range. You can look at “sites with images,” “Wonder wheel” results, “related searches,” a “timeline” view, “nearby” results, or “translated foreign pages.”
I suspect that Google could explicitly offer a number of other ways for us to filter the search results that we see, but I wonder if there’s a point where it all becomes too much. Maybe it wouldn’t be if they offered more of those on their “advanced” search pages.
@Bill Slawski – Thanks for replying to my comments, Bill. Love this site!
Hi Andrew,
Thanks for your kind words. You’re welcome.
Very complicated post for me, but worth the read, especially if you are dealing with SEO. The phrase based indexing description is very interesting!
I have 2 questions:
-Point 9.
Does the rule ‘maturity vs. freshness’ exist in Google’s ranking algorithm (2011)? I have seen many new baby domains outrank huge authority sites. I have the feeling that this factor has a small or no weight in ranking.
-Inbound links
Also, quality and content-relevant backlinks are the most important factor in Google ranking, right? Because I can’t do anything to outrank a competitor if their inbound links are better than my links.
Hi Theo,
Thank you. Regarding maturity vs. freshness, it’s quite possible that new and fresh results may sometimes rank highly in search results when there seems to be a somewhat unusual upswing in interest in a topic by searchers. For instance, if you do a search for the [Declaration of independence], you might find most of the top search results to be fairly old. While it’s something that people search for, it’s not a topic that often gets a lot of searches in bursts. Now, if it were discovered that there was an author of the Declaration that no one knew about, and suddenly people are searching for [Declaration of Independence] a lot more, and looking for newer results, Google might start showing more fresh results for the query.
Relevance is still an extremely important factor. Individual links can be pretty weak signals by themselves, especially if they are from pages with low PageRanks, but keep on adding more and more and more, and they have the potential to hold a stronger role than onpage relevance. For example, the Adobe Reader page ranks number one in Google for the phrase [click here] even though those terms don’t appear on the page because of all of the links on the web that point to the page with the words [click here]. It really does help though to make sure your page is relevant for a specific query before you start relying upon links.
I am truly grateful you decided to only post about 10 reasons search engines rerank rather than 25. That was already too much for me to wrap my head around. One question (if I understood more of what you were saying I’m sure I’d have many more questions): I don’t understand the Trust Rank thing. Just as I thought I understood it, you compared it to the label on the picture with the police dog and I was completely lost. Is there another place I can read up on Google’s Trust Rank?
Great insight with #10. I must have missed that New York Times article. Diversification is a great idea for search engines but it makes it tougher on SEO specialists. Do you agree?
Hi Traci,
I started the post with a list of 25, but it was getting hard for me to wrap my head around all of them as well.
What I was pointing out with the photo was the annotation scrawled at the top of the picture, “Police Dog and Arrested Man.” Google’s version of trustrank is unlike Yahoo’s version. Yahoo’s version attempts to find “good sites” on the web by following links from other good sites, on the assumption that good sites usually only link to other good sites. Google’s version of trustrank relies instead upon annotations found on the web about web sites from people that it believes it can trust.
As for #10, I think that knowing that Google will attempt to bring some diversity to search results makes it easier on people doing SEO because they have some predictability of how search results may behave. Chances are that if you were selling coffee on the Web, and you knew that if you tried to optimize one of your pages for a “java” related term you only stood a slim chance of ranking well through this diversity factor, because most results involving Java would probably be related to the programming language, you would probably choose something else.