How Might Click and Query Log Patterns Influence Search Results?
Imagine that many people use Google to perform a search for “orange,” and then “banana,” and then “pineapple,” and then choose the web page “http://www.example.com/fruit.htm” in the search results they see.
Now imagine that Google looks at the information it collects about what people do when they search, and finds click and query log patterns showing that there are a large number, a statistically significant number, of people who search for “orange,” and then “banana,” and then “pineapple,” or possibly the same search terms in a slightly different order. They tend to click on “http://www.example.com/fruit.htm.”
Google may also notice in query logs that people are looking for some very related terms during query sessions, such as consecutive searches for “banana,” “an apple,” and “pineapple.”
Since this second set of queries, “banana,” “an apple,” and “pineapple,” is so similar to the query sessions that contained the search terms “orange,” “banana,” and “pineapple,” where people were choosing the page “http://www.example.com/fruit.htm,” Google may choose to adjust the ranking of “http://www.example.com/fruit.htm” for people using those closely related terms in their search sessions.
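The patent doesn’t spell out how two query sessions are judged “similar,” but one simple way to illustrate the idea is set overlap (Jaccard similarity) over the terms in each session. This is a hypothetical sketch of mine, not Google’s actual measure:

```python
def session_similarity(session_a, session_b):
    """Compare two query sessions by the overlap of their terms
    (Jaccard similarity). A hypothetical illustration -- the patent
    does not specify how session similarity is measured."""
    terms_a = {term for query in session_a for term in query.split()}
    terms_b = {term for query in session_b for term in query.split()}
    if not terms_a and not terms_b:
        return 0.0
    return len(terms_a & terms_b) / len(terms_a | terms_b)

# The two example sessions from the article:
known_pattern = ["orange", "banana", "pineapple"]
new_session = ["banana", "an apple", "pineapple"]

print(session_similarity(known_pattern, new_session))  # → 0.4
```

A session scoring high enough against a known pattern could then inherit that pattern’s ranking adjustment, which is roughly what the patent describes.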
Google was granted a patent on this query log patterns process this past week:
Rank-adjusted content items
Invented by Mayur Datar, Kedar Dhamdhere, and Ashutosh Garg
Assigned to Google
US Patent 7,610,282
Granted October 27, 2009
Filed March 30, 2007
Click logs and query logs are processed to identify statistical search patterns. A search session is compared to the statistical search patterns. Content items responsive to a query of the search session are identified, and a ranking of the content items is adjusted based on the comparison.
Query log patterns can be of great value to search engines, since they show what searches visitors to the search engine are actually performing.
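The process the patent abstract describes can be sketched in miniature: aggregate (query session, clicked page) pairs from the logs, keep the patterns that recur often enough, and boost matching pages for later searchers who follow a similar session. This is a simplified illustration under my own assumptions; the function names, the raw count threshold standing in for a real significance test, and the flat score boost are all hypothetical, not details from the patent:

```python
from collections import Counter

def mine_patterns(sessions, min_count=3):
    """Aggregate many searchers' (query session, clicked page) pairs
    and keep those that recur at least min_count times. A raw count
    threshold stands in for the patent's "statistically significant"
    test in this simplified sketch."""
    counts = Counter(
        (tuple(sorted(queries)), clicked_url)
        for queries, clicked_url in sessions
    )
    return {pattern: n for pattern, n in counts.items() if n >= min_count}

def adjust_ranking(results, current_session, patterns, boost=5.0):
    """Re-rank results (a dict of URL -> base score) for the current
    session: pages that past searchers chose after a matching query
    pattern get a flat score boost."""
    key = tuple(sorted(current_session))
    adjusted = dict(results)
    for (pattern_queries, url) in patterns:
        if pattern_queries == key and url in adjusted:
            adjusted[url] += boost
    return sorted(adjusted, key=adjusted.get, reverse=True)

# Demo with the article's example: many searchers ran the same three
# queries (in varying order) and clicked the fruit page.
logs = [(["orange", "banana", "pineapple"],
         "http://www.example.com/fruit.htm")] * 4
patterns = mine_patterns(logs)
base_scores = {"http://www.example.com/fruit.htm": 1.0,
               "http://other.example/page": 2.0}
print(adjust_ranking(base_scores, ["pineapple", "orange", "banana"],
                     patterns))
```

Sorting the queries before hashing is one crude way to honor the patent’s note that the same terms may appear “in a slightly different order”; a production system would plainly need something more robust.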
39 thoughts on “Search Engines May Adjust Rankings based on Click and Query Log Patterns”
Would this mean that, as website owners, we should think of the natural or possible progression of search queries, and see whether our site meets that progression and adds value? I have been performing keyword research lately with the goal of increasing traffic. I was considering semantic options at this time; however, I was not considering an approach based upon a taxonomy of those terms. Back to the drawing board.
Well, this would be great! I guess there is nothing more to do other than paying closer attention to the content and structure of our web site. We also have to conduct keyword research in a way that better accommodates users’ search needs.
That would definitely make sense, and I can see value in propagating higher results for a page that falls into this situation.
It all comes back to thinking like your visitor rather than a search engine, and structuring your site in a logical, progressive way.
Very good article, and I completely agree! Google is just like ‘big brother’, and takes every opportunity to adjust their rankings if there is too much activity.
I think real-time search technology is starting to make some of these minor search engine improvements moot.
This idea is similar to another idea about the Vince Update. Check this iCrossing post. But the iCrossing post highlighted brands, while this patent talks about normal queries.
You just have to wonder how high a priority can be placed on this data when adjusting the rankings. Certainly other “standard” data would take priority (title tag, links, etc.). There is only so much data that even Google can process efficiently.
I personally think this is a great idea, as it will force designers, UX and SEOs to strike a balance between focusing on both the user and the spiders. I think, as it sits now, this is done outside of Google’s algorithm indirectly, in that pages which strike this balance (between the user and the search engines) are found through SEO. They are then used (and liked) by searchers, and therefore have a higher likelihood of being linked to, which helps their rankings, and the circle continues.
I think that’s not a bad idea. We don’t have access to the data that search engines do, but that doesn’t keep us from performing those searches ourselves, and seeing how well they might fill someone’s informational needs, and trying to anticipate how they might refine queries that they may use.
I think it’s often a good idea to do at least a little keyword research for sites on an ongoing basis, as well as checking search results for terms that people are finding your site for – not so much for your rankings, but rather to see what else is showing up in those results, whether other sites, or query refinement suggestions, or images or videos or news results that might show up.
Good point – I agree completely. That should be the initial goal of keyword research – are you using words on your pages that people looking for what you offer will likely use to search using, and will likely expect to see on your pages? How well can you anticipate how they might refine their queries to find your pages?
It can be tough, trying to step into the shoes of a searcher, and the possible situations that they might find themselves in that might lead them to your site if you use the right words. A lot of times it does come down to figuring out who the audience is for your site, and what they would want to see, and providing it to them in words they might use, and in a structure that makes sense to them.
Hi People Finder,
That’s an interesting point. What interests me the most about this is that a search engine would mix information that it has about web pages with information about how people search and click on search results to try to understand the context of searches better. Is this approach something that real-time search technologies make unnecessary, or is it an approach that might help return better results, even if those results are real-time?
Both of the theories in the iCrossing article do mention the search engine considering information found in query refinements and in which pages people click in response to queries, so there definitely appears to be some overlap. What we’ve heard from Google about that update was that it wasn’t necessarily focused upon brands, even though some appear to have gotten a boost in search results.
Yes, I agree. This isn’t a ranking method by itself, but rather an adjusting of rankings for some pages when Google might find some kind of search pattern involving query sessions and decisions to click on pages during those sessions. We don’t know what percentage of searches might be impacted by a method like this, but I expect that some of the processing can happen before someone actually searches, for a certain percentage of queries. This process would involve search patterns of a statistically significant level, so it might be less likely to be used for less popular and more unique queries.
I don’t think that this process is aimed at adjusting search results to limit traffic to certain specific sites, but rather to try to understand the context of searches based upon patterns that might be perceived in large numbers of search sessions from other people.
Hi Answer Blip,
I do like that this process looks beyond what it finds on pages upon the Web to data it collects about how people use the search engine, and the pages that they choose. You’re right that sites that are optimized better may end up showing up more often in search results because of the process. In some ways that’s good, and in others it might make it a little harder for sites that have good content but that also have bad rankings to show in search results. It does provide some incentive for site owners to optimize their pages better, and to make them pages that people end up visiting.
The Wonder Wheel and related searches on Google might give us a glimpse of what users might be looking at. If, as a website owner, I include those terms in my body copy and interlink those pages, that should help me feature prominently, right?
I would definitely look at the words and phrases in Google’s Wonder Wheel and related searches, and consider using them in body copy if appropriate, but I wouldn’t rely too much on those, or limit myself to including just those. It’s quite possible that Google is only showing a limited amount of “related” terms and phrases in those features, and for some words those might not be good matches – especially for terms that might have multiple meanings.
I’m just a new learner in search engine stuff. This post gives me a new thought that I don’t understand entirely. So if Google adjusts its ranking process based on people’s query and click logs, isn’t it the same as Google’s suggestion to write focused content? I mean, content with related words, written naturally for humans?
Not necessarily. What this post is telling us is that Google is looking at more information than just what appears on the pages of a site to rank search results. It’s looking at patterns in the way people search, and choosing pages that are clicked upon frequently when those patterns seem to appear in their search sessions.
While that might tell us a little about how satisfied people are with the final pages that they end up upon, it may not be telling us too much about how to write content for pages.
Google is always making something new for improvements, which actually gives good outcomes for us. I think there are new ways of ranking, and it might change a few months from now.
One of the challenges that face us is that it’s possible for a search engine to make changes that may not improve things, or may favor one type of result over another. We do know that the search engines are constantly looking for ways to make search better, and that they do make changes on a regular basis. It does make things interesting.
Interesting stuff. Does this mean we should try to relate content from our site more generally? So if we specifically cover the topic of “bananas,” we should try to cover the topic of “apples” too. Would this improve rankings in the SERPs?
It sounds like you’re trying to get some kind of actionable steps that you can take based upon how Google might look at patterns within a query session.
This patent filing provides us with some ideas on how Google might pay attention to patterns in search query sessions when it believes people are looking for a particular resource, and have been refining their queries to find that resource. It may be difficult for us to anticipate a string of searches that more than one person might use on a regular basis to find one of our pages, but the patent filing does suggest to me that a site that provides a good user experience, and addresses topics on its pages that one would consider related, might see some benefit from this reranking approach.
Is this technique still in use by Google today? If I perform a lot of searches at one time, will my searches be recorded so that Google brings me more links related to what I searched before? And if I clear my cookies and everything about my history, will Google still bring me the same results as before I cleared them?
Very interesting patent. But how strong do you think this could be as a ranking factor?
I do think that these types of patterns are being viewed and used by Google still today.
This isn’t so much about Google keeping track of your individual past searches, but rather patterns that they see in aggregated data from many searches and many searchers. If you clear your cookies, and then perform a number of searches, and Google recognizes a pattern in your new searches that it’s seen from many other searchers, that may influence the order that search results are presented to you.
It’s not so much a “ranking factor” as it is a reranking approach based upon data that Google has collected about past searches and page selections from other searchers.
In short: semantic web + logs = better search results.
If you use the concept of a “semantic” web as many do – a structure of content that contains meaning within the structure itself – then I’m not sure that “semantic” plays a strong role in the process described in the patent. But the search engine is definitely creating a body of associations between searches and the pages people click upon, to attempt to understand semantic relationships between queries.