How Google May Transform Your Search into Multiple Related Searches (Without Your Knowledge)

Imagine that you want to find a pair of vintage Levi’s jeans for sale on the Web. You go to Google and enter the search terms [vintage clothes jeans]. Your expectation and mine might be that Google performs a single search for all three terms, but what if instead it performs a first search for [vintage clothes], and then a second search for [jeans] amongst the pages it receives from that first search? It might also include results from pages linked to or from the pages that show up in the top initial search results.

Chances are that you would get a very different set of search results from each approach. And you wouldn’t even know that Google did two searches instead of one. (Though you might have suspected something really odd was happening if you owned a site that sold vintage Levi’s jeans and watched Google’s results carefully.)

A screenshot from the patent showing top results on a search for vintage clothes, with a search box appearing above the results allowing searchers to search more deeply through those top results.

Or imagine that you entered [vintage clothes] as your initial query, and Google gave you a search box at the top of the results that let you enter another search term or phrase to search within the top ten results. The image above is a screenshot from a Google patent granted last week that would enable those types of searches within searches.

The patent tells us that in the first version of this related-search approach, the second, automatic search might include a search of all of the pages that link to, or are linked from, a result that appears among the top results of the first search. Or, if the page showing in the initial search results is the homepage of a site, the second search might include a search of all of the pages of that site.
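To make the flow concrete, here is a small Python sketch of my reading of that first version: a first search over the whole index for one portion of the query, a candidate set expanded with pages linked to or from the top results, and a second search restricted to those candidates. This is a toy illustration, not Google’s implementation; the term-counting scorer and the data layout are stand-ins I made up.

```python
def search_within_search(query_part1, query_part2, index, top_n=10):
    """Toy sketch of the patent's automatic two-phase search.

    index maps page -> {"text": str, "links": set of outbound pages}.
    """
    def score(page, terms):
        # Stand-in relevance score: how often the query terms appear.
        text = index[page]["text"].lower()
        return sum(text.count(t.lower()) for t in terms.split())

    # First search: rank every page against the first portion of the query.
    first_results = sorted(index, key=lambda p: score(p, query_part1),
                           reverse=True)[:top_n]

    # Build the second set of content items: the top results plus pages
    # linked from, or linking to, those results.
    candidates = set(first_results)
    for page in first_results:
        candidates |= index[page]["links"]                             # linked from
        candidates |= {p for p in index if page in index[p]["links"]}  # linking to

    # Second search: rank only the candidate set for the second portion.
    return sorted(candidates, key=lambda p: score(p, query_part2),
                  reverse=True)
```

Note what the restriction implies: a page that scores well for [jeans] but not for [vintage clothes] never enters the candidate set, which is exactly the scenario a long-tail site owner might worry about.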

Google has tried a number of interface experiments over the years, introducing new interface features that might be here today and gone tomorrow, and it might not be a surprise to see something like this spring up overnight and then disappear. Chances are that Google even performed this experiment before applying for the patent, and was encouraged enough by the results to file it.

As a searcher or site owner, do you think the automatic search-within-a-search that I described first (an initial search for [vintage clothes], followed by a second search within those results for [jeans]) would yield better results than a single search for [vintage clothes jeans]? Would it be more likely or less likely to deliver relevant results for the whole query?

If Google offered a search box with a chance to search amongst the top results, would that mean that fewer people might look at the second page of search results? I’m not sure, but I think it might. That may depend upon whether or not searchers were satisfied with the results they saw.

The patent is:

Performing multiple related searches
Invented by Corin Anderson, and Benedict A. Gomes
Assigned to Google Inc.
US Patent 7,991,780
Granted August 2, 2011
Filed: May 7, 2008

Abstract

A first search is performed in response to a received search query. The first search is based at least in part on a first portion of the search query. In the first search, a first set of content items are searched over to identify a first set of search results. Each result in the first set of search results identifies at least one content item of the first set of content items. A second set of content items for performing a second search is determined based at least in part on one or more of the results in the first set of search results. The second set of content items includes content items not included in the first set of search results.

A second search is performed, searching over the second set of content items to identify a second set of search results. The second search is based at least in part on a second portion of the search query. Each result in the second set of search results identifies at least one content item of the second set of content items.

Conclusion

It’s easy to assume that when you enter search terms into Google’s search box, the results you see come from a single search, but this patent challenges that assumption.

Imagine a site owner with a hotel in Las Vegas (an example from the patent), who strives to create the finest buffet on the Strip. He or she may spend a lot of time and effort creating a web page about that buffet, and expect it to rank well in search results for [las vegas hotel buffet]. When someone actually performs that search, Google might initially search for [las vegas hotel], then search within the top results for that query for [buffet], and never return the page from our buffet owner, who may have optimized for the longer term.

Could the process described in this patent have an impact on long tail queries, and the sites that attempt to optimize for them?

It’s possible.

There are two different scenarios described in this patent. The first is where Google breaks a query down into parts and automatically performs a second search. The second is where Google might give searchers a chance to search within results. Both seem like they might benefit established sites that rank well for more competitive and general terms, to the detriment of less established sites attempting to climb in the results by focusing upon longer queries.


47 thoughts on “How Google May Transform Your Search into Multiple Related Searches (Without Your Knowledge)”

  1. Nice Find. The Local map listings / organic results mashup or blended results as they have been called do this but I see this as a search retention or drilldown technique to provide the searcher a qualified result or results without burdening the algo with another complete set of results. It would take some of the burden off the processing if they were able to search a subset of results.

  2. Good dig! If parsing the long tail into components where sites ranking in the broad categories form the hierarchy, that would definitely benefit bigger, older sites. The unfair part would be if the secondary term is searched only within the first set of returns. Lesser sites, niche sites, and newer sites might never get a chance to rank and get exposure. With user engagement being measured, that would be harsh; finding diamonds in the rough and promoting them socially with links and +1s, likes, and tweets would also be diminished. With structured data on the rise, I can see the potential for GOOG to make the user interface more of a faceted navigational or constraint-type search, such as your mock-up. For me the question is how to serve the type of content that would adhere to this potential new landscape.

    RE: Could the process described in this patent have an impact on long tail queries, and the sites that attempt to optimize for them?
    Yes, in order to capture long tail traffic you may need to qualify first on the long tail.

  3. I cannot help but think that Google would not apply this type of search to all searches. The Las Vegas buffet example makes sense; however, a search for ‘custom dry fly fishing rods’ benefits from long tail search because it would eliminate Wal-Mart and large retailers from the results.

    Obviously it would hurt smaller sites, and to be honest I do not think Google has a position on whether it should or shouldn’t hurt small sites. Google just wants to deliver the most relevant result. In the Las Vegas buffet example it would be more expeditious to break the search into two parts; however, in many other searches that would not deliver the most relevant results. I suspect that this development should not change the way we optimize most sites for search engines. It is interesting though.

  4. Awesome find! I agree that depending on the accuracy of the search and the length of the query, the results may vary dramatically. But on the other hand, if Google is finding the top providers/suppliers of one part of your search, then chances are the second part of your query will be of at least good quality as well. Take the Las Vegas hotel for example. A result showing the top hotels – wouldn’t this hotel most likely have a top buffet as well? After all, Google is engineered for those looking for the query, not the owner of the hotel with a great up-and-coming buffet. Maybe? Who knows.

  5. Check out this thread from WebMaster World today ==> seems apropos: Google Bolding On Synonyms For Web Searches http://bit.ly/qaBGmi

    a search for ‘leather couches’ turns up related searches for ‘couches’ ‘furniture’ and ‘futons’ … strangely though, this synonym bolding action only happens when using an adjective as a token in the search phrase. When using just the noun ‘couch’ or couches’ you don’t see the same type of related results (although a lot of the brands in the results are the same).

  6. If they go fully operational with this, I wonder how different it will really make things.

    For instance, at the moment isn’t the rule left-to-right importance in the title, with diminishing value for the words at the end?

    Perhaps they already use their algorithm on titles as well as searches? Then again surely anchor text from incoming links will still have a big say in how you rank. All of this sounds theoretically important but for many sales orientated sites, the question is just simply how many visitors did I convert to sales. Sometimes the long tail keywords are a very big percentage of the visitors to the site.

    Hope we get some warning this time anyway, or was this article our warning?!

  7. I think this would be more beneficial to authoritative domains, but to less authoritative domains that have to target these specific queries, it could hurt them. This idea would help improve quality of searches, but because targeting these types of searches would be more difficult, the authoritative sites would win. In reality, I don’t see this happening, but it’s fun to think about.

  8. If Google does that, it would really be a big change and a big challenge to website owners, especially those who are still thinking of starting their own. But won’t that affect the quality of the search results Google may give to users?

    If I’m a user and I want specific information on a specific product, and Google gives me results for two general phrases, I would go nuts and go find another search engine.

  9. How can such a technique be considered patentable? Where is this anything but an obvious technique, with no inventiveness? Ideas such as this are used every day by software developers as part of an arsenal of techniques to get the job done – it’s not an invention.

    While the patent authorities allow such frivolous ideas to become patentable, it makes a nonsense of anti-competitive legislation.

  10. Morning Bill,

    This is a really interesting find – thanks again for putting the time into what is probably the best resource on what is effectively becoming one of the most important aspects of one of the world’s most important corporations!

    There are some articles that are really interesting in a curiosity kind of way – they seem to suck up time but are just too interesting to leave alone – like “Google Acquiring 1000 IBM Patents.”
    Some of the news, like the authorship tags, I didn’t think was brilliant or possible to incorporate, but I’ve generally come to think that Authorship Markup is actually viable and a positive development.

    Then there are some articles that make me hope (selfishly – admittedly) that the patent would just go away. I don’t think that things like the google +1 are really very helpful for smaller businesses and I can’t say that I think things like an initial broad reaching search term followed by a niche search term are helpful either.

    My theory is that for bigger companies and the catalogue-style retailers this is probably great. A search for book shelves, for instance, will throw up hundreds of optimised sites offering all manner of things within which to store books. If my precise requirement is a black, lacquered wood bookcase, then by sifting through from generic to niche search while only incorporating the results from the initial broad search, I’m surely going to miss the site that’s optimised purely for lacquered wood, black book shelves?

    The water treatment sector is fairly huge with a large number of companies offering solutions to ensure that waste produce from industry is properly managed for all our benefits. We generally provide components for this like Dosing Pumps and other solutions. A search for water treatment then a niche search on our products would be really detrimental for us.

    Maybe someone could let me know if I’ve got the wrong end of the stick on this one – I’ve been up for most of the night thanks to the wonderful folks who chose to riot in London last night…

    Tom

  11. I guess for users it will definitely yield more refined and relevant search results. The multiple searches of a single query may take more time, but it won’t hurt when the user is able to find exactly what he was looking for by limiting the area in which he wants Google to search. Well, the thing which is confusing me is whether it will have some effect on the trends of doing onsite SEO or defining title and meta tags :/

  12. Hi Bill
    Looking at all these patents, all of them are US patents only. As we know, every country has its own patent rules, and as far as the European Union goes, they are very reluctant to patent any software, so in principle, as long as your server is based outside the US, you can replicate this as you please.
    I doubt the EU patent authority would grant Google a patent for this sort of feature. At the end of the day it is only to improve the user experience.

  13. Hi Dave

    Google definitely does perform multiple searches for query terms on different databases such as local, and then finds a way to display those on a page. My post How Google Universal Search and Blended Results May Work explores that. But nothing in there really tells us that they might be splitting the queries and performing a search within a search.

    I do like the idea of providing another search box at the top of results for people to search within the top results. It’s transparent in that you know that you’ll only see results from the top results displayed, like in the screenshot. I don’t know whether, if Google decides to do something like that, it might mean that fewer people go to the second page of search results.

    It might mean less processing, though for many popular queries, Google often caches results to save on processing. If they are only caching the top ten results for those queries, then this might save on processing. If they cache the top 20 or 30 results, then it might not.

  14. Hi Scott,

    If there are some very relevant results for the long tail queries, then searchers might potentially miss out on pages that are very good matches for their intent if this search-within-a-search approach is used.

    On the other hand, if the search of all the query terms together resulted in a list of pages that just aren’t very relevant, it’s possible that this approach might yield better results.

    Unfortunately, that’s not something that’s discussed in the patent, and we’re not told when Google might decide to do this type of search where they would break down a query into parts.

    We’re also not told how they would decide which part of a query to do a search on first, and then which part of the query to do a second search upon.

    For example, on the search for [las vegas hotel buffet], if the search were broken into a first search for [las vegas] and then a second search amongst those results for [hotel buffet], I would think the results might not be as good as breaking it down into a search for [las vegas hotel], followed by a search within those results for [buffet].

    The more I think about this patent, the more details I want to hear about how it might potentially be implemented.

  15. Bill –
    Very interesting analysis here. I do think that providing a secondary search option (search for “jeans” within the search “vintage clothes”) could provide a better experience and better search results.

    I was at Mozcon two weeks ago where Stefan Weitz from Bing was talking about ways that Internet marketers and webmasters/web developers can help search engines know what their content is all about. I think if you combine the Schema.org/metadata markups with this potential search option, the web could become much better.

    Thanks for all the insights you provide into patents.

  16. Hi Dan,

    Would it make sense to break a search phrase like “custom dry fly fishing rods” into multiple parts? Would it possibly provide better results? Might Google perform two sets of searches:

    1. An initial search for [fishing rods] followed by a search within the top results for [custom dry fly]
    2. A search for the whole query [custom dry fly fishing rods]

    After performing both, might it then compare the two in terms of the quality of results that both yield, and then show you one or the other?

    I don’t think that Google focuses upon the size of the site either, but rather upon delivering the best results that they can.

  17. Hi Scott,

    Thanks. I think you raise a pretty good point about the focus of results being to the benefit of the searcher rather than the site owner. So, what assumptions might they be making that go with that? I’m not sure that the “best” hotels would be the ones that might have the best buffets. They might have the most expensive, but not necessarily the highest quality or the best prices. :)

  18. Hi Anthony,

    I did read through that thread earlier today. Google is expanding queries based upon synonyms, and has been for a while, and a post from one of the Google blogs mentioned that they would highlight the synonyms when they were relevant to the query used.

    Part of the synonym process seems to be in finding synonyms within the same context. A page might have the terms car and auto on the same page, and they are synonyms, but when you place them in a context like “car mechanic” or “auto mechanic” as an “adjective,” they are more clearly used as part of a pattern that shows they are being used as synonyms within the same context.

    One of the posts in the Webmaster World thread does point to a post I wrote about synonyms where I discussed that kind of pattern matching as one of the things that Google might look for to understand the context of the use of words that might be synonyms.

  19. Hi Bruce,

    It’s possible that Google might look at a range of features on pages to decide whether or not it might be a good idea to try to break a query up into parts like this, and which parts to use.

    For example, with a query like [New York hotel buffet], Google could potentially have a few choices of terms and phrases to use to break up that query:

    [new york hotel buffet]
    [new york][hotel buffet]
    [new][york hotel buffet]
    [york][new hotel buffet]

    I mentioned in a comment above that the patent really doesn’t discuss how it might decide what might be the focus of the first search, and what might be the focus of the second search. It’s possible that Google might look at the use of phrases like those on pages, and where those terms might appear on those pages. It might also look in query logs, and at query sessions to see what people tend to search for and how they might modify their queries when they are doing multiple searches that tend to be related.
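    As a rough sketch of just the split-enumeration step, here is one hypothetical way to list every contiguous break point in a query; a reordered split like [york][new hotel buffet] would need an extra shuffling step, and none of this comes from the patent itself:

```python
def candidate_splits(query):
    """List ways to break a query into two contiguous portions."""
    terms = query.split()
    splits = [(query, "")]  # the unsplit query itself
    for i in range(1, len(terms)):
        splits.append((" ".join(terms[:i]), " ".join(terms[i:])))
    return splits

# candidate_splits("new york hotel buffet") yields the unsplit query plus
# [new][york hotel buffet], [new york][hotel buffet], [new york hotel][buffet]
```

    Query-log and phrase data, as discussed above, would then be one plausible way to score which of those splits to actually use.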

  20. Hi Jonathan,

    It is interesting to think about how something like this might be implemented (and if part of it already has been), and considering how it might be influenced by things like how authoritative a site might be seen for a particular query.

    If, as I’m thinking, Google might consider seeing what results are like both with this approach in place, and without it, and then comparing the two sets of results, another question that arises is how they are comparing the two different sets of results.

  21. Hi Andrew,

    I think you pinpointed one of the very real challenges that face search engines. If this kind of approach works well, it may present some new challenges for websites, but users may be happier with the results they see. If it produces worse results, Google runs the risk of people abandoning it, and using another search engine.

  22. Hi Bill,

    I understand the points that you’re making, and I definitely agree that the patent process should focus only upon granting patents that are presenting something new and useful and nonobvious.

    Regardless of that though, this patent has been granted, and it describes a process that Google could potentially use. If it is used, then it could potentially have implications for searchers and site owners. My focus is more upon exploring what those implications might be than upon questioning the patentability of the patent itself, because it has been granted.

  23. Hi Tom,

    Thanks for your kind words.

    The riots have me concerned for everyone’s safety. I hope that you and your family and friends and associates aren’t harmed, and that the riots end quickly.

    Some of the patents that I see concern me pretty deeply as well, and I hope that when I see those that if they are implemented that Google is careful in how they are doing so. I do see some potential in the process described in this patent for improving some search results, and for potentially making others worse. Sometimes, to use a cliche, the devil’s in the details, or in how a process might be implemented.

  24. Hi Stephen,

    I think that there is some potential in this approach that could impact onsite SEO. If you focus upon longer terms in your optimization, and Google decides to show results where they split up a query, it might mean less impressions and traffic to pages that focus upon those longer terms.

  25. Hi John,

    I usually search through the US patent office filings, which is why I usually link to the USPTO version of a patent, but I do end up seeing many of the same patents from Google over on the WIPO site as well.

  26. Hi John,

    Thank you.

    Speaking of Bing, I spent a little time trying to understand what a “decision” engine was, or was supposed to be recently, and the general idea is that a decision engine attempts to involve more actual input from a searcher to help find results. So, something like the search box above search results that would allow you to search within the top results would be something that you would see in a decision engine.

    I agree completely with Stefan Weitz that webmasters taking steps to make it easier for a search engine to understand and index their content are usually worth taking, and the schema.org metadata is a good step in that direction.

  27. Great article and analysis. I am curious to know when exactly Google uses this type of search. I can’t imagine that this will affect all types of searches, as Google just tries to find you the most relevant links.

  28. This is definitely a push toward their inclusion of the average user’s final destination and bounce rate for similar keywords/phrases. Learning more and more about how people use the search engines to find and stay at specific, trusted websites can help garner higher quality results for a term or phrase. Again, this helps show how important it is for websites to reach authority/trusted status with Google, Bing, and Yahoo. You’ll see higher-ranking results for “related” terms and phrases; while it may not be 100% accurate for the user’s search, you’ll see the authority sites rank highly because they have been trusted and are a common final destination of users who search in that niche/market. It’s been a major factor in eliminating a lot of the exact-match-domain “AdSense sites” that are 10-30 pages (usually fairly weak content) but do not keep users around very long (high bounce rate).

    At least since Panda it seems to be much more prevalent.

  29. Yes, smaller businesses do stand a chance to lose out on the action despite legitimate measures and hard work. Perhaps the patent idea needs to be looked at by experts.
    Very well written and analyzed! Truly indebted for your share.

  30. Does this mean the long tail approach to SEO will start to become less effective, if Google is going to start identifying the most important words in a phrase and using those for the search instead of looking for a match on the whole phrase? I can see this hurting the little sites more than the big sites.

  31. I agree with Jonathon Cooper, this seems like it will ultimately only benefit bigger brands on the search front. At the end of the day, smarter searches create a more intelligent search engine. I think educating people on how to properly conduct search queries would make a big difference.

  32. For most searches, most people check only the first page of results which the search engine displays. You could say 70-85 percent of people check only the first page of results and then move on.

  33. Especially when performing searches for a certain “trade” like plumber, “car rental” or hotel etc., even adding a city name will add another criterion, and then Google might perform up to 4-5 different searches. Besides making separate searches by keyword “themes,” the Google machine also searches separate databases such as places, videos, images, etc. However, these searches get an 11th and 12th position, which one can say does not influence the ranks of the other organic websites, but I say it pushes them down the scroll, and that has caused many number-one websites to fall below the scroll once places are added to the entire first impression of results.
    According to the “Google Golden Triangle” (do an image search on that if you don’t know what it is), those websites were pretty much “gone” and feel penalized just because Google is experimenting around with these super-smart regex-filtered search results – my 5 cents to Google: keep it simple before Facebook takes over with simplicity overnight by having the best social search engine in the world, instead of wasting time on maps with 50 spammy overlapping places pins.

  34. Hi Renee,

    We don’t know for certain if Google is using this approach, but now we do know that they might be.

    I agree that it’s quite possible that Google wouldn’t use this on all searches, but the big question as you’ve pointed out is when would they. I suspect that they would try to limit its uses to those situations where it provides better quality results, or at least to the situations where they think it might.

  35. Hi Mike,

    There are likely many instances of long tail queries where the quality of results are underwhelming at best, but I’m not sure that a blanket approach to providing these types of searches within searches is going to always provide the best results.

    There definitely is the potential to bias results towards content found upon sites that might be considered more authoritative though.

    I do wish the patent provided more information about when they might potentially trigger this kind of search within a search, but it doesn’t.

  36. Hi Tommy,

    Thanks. I suspect that most of the people who could be said to be experts on topics like this are often the same people who are writing patents on these topics.

    Google does have a process for review of the experiments that they perform that they’ve described in detail in a paper that I wrote about in the post We’re All Google’s Lab Rats. Part of that process does involve experts looking at the experiments of others, and offering suggestions and help. I would guess that patents like this one often come out of that kind of experimentation.

  37. Hi Vinney,

    I think it might be worth considering the possibility that a split search like this might happen sometimes if you are considering optimizing for long tail terms. It may be more likely to happen in instances where there might be multiple phrases in a query, or where a phrase can be easily split into meaningful parts.

  38. Hi Seazel,

    A lot of the data that I see reported from the search engines and others does suggest that for many terms people tend to look only at the first page of search results. It’s also possible that many people will look at the top ten results and, if they don’t see something there that appears to be useful, will refine their search in some way and perform another search rather than click to see the second page.

  39. Hi Elijah,

    I think I prefer letting people see results for a long tail query without the search engine manipulating results by performing a partial search, and then a search within that search. It’s possible that less relevant results might appear in the results for the longer query, but those fewer results that are relevant for the longer term may be more on point than the results that show up where Google might perform two searches behind the scenes.

  40. Hi Gregory,

    I think Google’s prime aim is in providing relevant results, regardless of whether the site those appear upon are bigger businesses or smaller ones. It’s possible that if Google uses the approach described in this patent that it may make things more challenging for sites that rely upon long tail queries, but there are many big businesses that do that as well as small businesses.

  41. Hi Ron,

    I do remember when the golden triangle research came out, showing heatmaps of where people tend to look on search results pages, and those did tend to favor the top results to the detriment of results lower down on the first page.

    It’s tempting to say that the introduction of images and videos and maps results could potentially hurt small businesses, but Google Maps and Place results do often provide opportunities for smaller businesses to appear in Web search results where they might not have before, showing up with icons and possibly maps that might overcome that golden triangle pattern and draw more attention to them.

  42. Well, we could always say that giving the new ones a chance is fair, but I think that randomized places results (when no reviews have been given yet), and then later letting man-made reviews (stars) with poor fraud control decide who is first and who is not, just ruins the idea of “organic” results – maybe Google should get over their silly “I’m feeling lucky” and change that to “let Google decide” or “super smart Google result” and leave the original search button to the good old organic growth that some have put many years and tens of thousands into to be number 1-3 – AND NOT LET some one-day-fly plumber that shows up from nowhere overtake some local plumbing icons. Google New Zealand, for example, decided just recently to copy its review counts (I found that by pure chance) from a local NZ directory called Finda – so if a website had 25 positive reviews on some external directory, all of a sudden this person gets Google position one overnight – and knowing how easy it was to make your own “fake” reviews on that NZ directory Finda a few years ago, it was a shock to witness that Google “trusts” such databases. Then I wonder where Google gets all the results from in other smaller countries (outside of the perfect USA and EU).

    I found another one of these review clone operations happen between Google and a website called wow-city (hint hint)

  43. Hi Ron,

    Without a doubt, Google needs to do much more to improve the quality and trustworthiness of their Places/Maps results. Having said that, I know I’ve spent a considerable amount of time and effort on making sure that Places results rank well for more than one business, in addition to spending considerable efforts in ranking results well organically. I have a hard time faulting the general idea that Places results can be useful to searchers who evidence an intent to find businesses locally, and I’ve found those types of results useful, even in some cases where the businesses listed might not have had web sites.

    I hadn’t heard about Google New Zealand’s use of reviews from Finda, and how drastically the sheer volume of reviews from that site could have that much of an impact upon local results. It sounds like Google may have been giving those reviews too much weight. Thanks for pointing that and wow-city out.
