A trio of patent applications from Google looks at estimating the likelihood that an advertisement is a good one, using methods that go beyond counting click-through rates (CTR).
They provide an extremely detailed list of factors that might go into a quality score, as well as details of different statistical models that might be generated from gathering information about those different factors.
A system provides one or more advertisements to users in response to search queries and logs user behavior associated with user selection of the one or more advertisements. The system also logs features associated with selected ones of the one or more advertisements or associated with the search queries.
The system further uses a statistical model and the logged user behavior to estimate quality scores associated with the selected advertisements and aggregates the estimated quality scores. The system predicts the quality of another advertisement using aggregated quality scores.
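Purely as an illustration of that flow (the log format, the weighted-sum scoring, and every name below are my own assumptions, not anything taken from the filing), the first application's approach might be sketched like this:

```python
# A minimal sketch, under assumed names: log behavior and features for selected
# ads, score each selection with a toy statistical model, aggregate the scores
# per ad, and use the aggregate to predict the quality of another ad.
from collections import defaultdict
from statistics import mean

def log_selection(log, ad_id, query, behavior, features):
    """Record one ad selection: the query, observed user behavior, and ad/query features."""
    log.append({"ad_id": ad_id, "query": query, "behavior": behavior, "features": features})

def estimate_quality(behavior, weights):
    """Toy statistical model: a weighted sum of logged behavior signals."""
    return sum(weights.get(signal, 0.0) * value for signal, value in behavior.items())

def aggregate_scores(log, weights):
    """Average the estimated quality scores for each ad."""
    per_ad = defaultdict(list)
    for entry in log:
        per_ad[entry["ad_id"]].append(estimate_quality(entry["behavior"], weights))
    return {ad_id: mean(scores) for ad_id, scores in per_ad.items()}

def predict_new_ad_quality(aggregated):
    """Naive prediction for an unseen ad: the average of the aggregated scores."""
    return mean(aggregated.values()) if aggregated else 0.0
```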
Estimating ad quality from observed user behavior
A system obtains ratings associated with a first set of advertisements hosted by one or more servers, where the ratings indicate a quality of the first set of advertisements.
The system observes multiple different first user actions associated with user selection of advertisements of the first set of advertisements and derives a statistical model using the observed first user actions and the obtained ratings.
The system further observes second user actions associated with user selection of a second advertisement hosted by the one or more servers and uses the statistical model and the second user actions to estimate quality of the second advertisement.
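To make that a little more concrete, here is a minimal sketch of the second flow, with a simple logistic regression standing in for the unspecified statistical model; the behavior signals, ratings, and numbers are invented for the example:

```python
# A logistic regression stands in for the statistical model the application
# leaves unspecified. Feature columns (invented): dwell time in seconds,
# clicks on the ad in the session, whether the user bounced back to results.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Observed user actions for ads that human raters have already scored.
observed_actions = np.array([
    [120, 2, 0],
    [  5, 1, 1],
    [ 90, 3, 0],
    [  8, 1, 1],
])
# Ratings of those ads: 1 = good quality, 0 = poor quality.
ratings = np.array([1, 0, 1, 0])

# Derive the statistical model from the observed actions and the ratings.
model = LogisticRegression().fit(observed_actions, ratings)

# Observe user actions for a second, unrated ad and estimate its quality.
second_ad_actions = np.array([[75, 2, 0]])
estimated_quality = model.predict_proba(second_ad_actions)[0, 1]
print(f"Estimated quality of the second ad: {estimated_quality:.2f}")
```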
The third patent application discusses comparing advertisements and the different quality parameters associated with them, to determine whether ads should be filtered, where they should be ranked, and whether some should be promoted.
Using estimated ad qualities for ad filtering, ranking and promotion
A system obtains a first parameter (QP1) associated with a quality of an advertisement among multiple advertisements, where the first quality parameter (QP1) does not include a click-through rate (CTR).
The system functionally combines the first quality parameter (QP1) with at least one other parameter and uses the functional combination to filter, rank, or promote the advertisement among the multiple advertisements.
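A very rough sketch of that idea, with a simple multiplication of QP1 by the advertiser's bid standing in for whatever functional combination the application actually contemplates, and with thresholds that are purely illustrative:

```python
# Combine a non-CTR quality parameter (QP1) with another parameter (here, the
# bid) and use the combination to filter, rank, and promote ads. The
# multiplication and the thresholds are assumptions, not values from the filing.

def score(ad):
    return ad["qp1"] * ad["bid"]          # one possible "functional combination"

def filter_rank_promote(ads, min_score=0.5, promote_score=2.0):
    eligible = [ad for ad in ads if score(ad) >= min_score]        # filtering
    ranked = sorted(eligible, key=score, reverse=True)             # ranking
    promoted = [ad for ad in ranked if score(ad) >= promote_score] # promotion
    return ranked, promoted

ads = [
    {"name": "ad_a", "qp1": 0.9, "bid": 2.5},
    {"name": "ad_b", "qp1": 0.3, "bid": 1.0},
    {"name": "ad_c", "qp1": 0.7, "bid": 4.0},
]
ranked, promoted = filter_rank_promote(ads)
```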
The patent applications list examples of 44 different factors that might be used in a quality score that doesn't focus upon click-through rates. These include such things as:
If you use paid advertisements through Google, these patent applications may be worth delving deeper into. It is pretty interesting to see all of the user behavior considerations that may go into determining a score and placement for an ad.
The documents all end by noting that conversion tracking may also be optionally used to see if a “direct calibration between predictive values and user satisfaction” can be derived.
This is a very interesting find. I made a post on seomoz.org where I hinted that user behavior is used by Google in their algorithm to help improve organic listings. I’m not surprised to see confirmation that this is the case for the paid search platform.
Great recap and look at the three patents – Thanks as always for sharing such valuable information!
Hi Miriam,
Thanks. That’s a very good question.
When looking at the searches that people perform, and the queries that they use, and other behavior associated with their searches, a lot of researchers were only looking at individual searches.
At some point, they started looking at search sessions, or “chains of queries.” There are a number of papers from 2005 that describe how researchers at search engines could try to understand when queries from a searcher might be related in some way, and when someone might be multi-tasking and searching for more than one thing in the same session.
Here’s a real-life example – I’ve done something like this myself:
So, let’s assume that someone starts searching for a particular brand of shoes. They type in [brandname shoes] into the Google search box, and look at the results. They click on one of the ads that they see, and they decide that they want to look at some of the search results.
They go back to the results and take a look, and decide that they should narrow their search a little, and type in [brandname boots]. They see the ad that they clicked on before, and try it again to see if it mentions boots. They look at the prices, and decide to comparison shop.
They see a particular model at the site from the ad, and search for [brandname modelname] in Google for that brand of shoes. They see the ad again, look at some of the organic search results, and visit some of the pages that those link to. Checking the prices at those pages, they decide that the link from the advertised site has a better offer. They return to the search results page, and click on the ad for the third time.
This is the same search session, and they clicked on the ad three times. The ad was high enough quality for the searcher to consider clicking on it in the first place, and the landing page may have been as good as the organic results that they saw for those shoes.
Researchers at the search engine may be mining the user data found in the search engine logs, and may have set up a program that notices that this searcher performed a number of related searches in the same session and clicked on the same ad during that session. Those three clicks appear to be an indication that the searcher found the ad to be of some quality.
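If it helps to picture how such a program might work, here is a toy sketch using the boots example above; the log format and the way sessions are grouped are assumptions made just for this illustration:

```python
# Count how many times each ad was clicked within each search session,
# then keep the repeated clicks as a possible positive quality signal.
from collections import Counter, defaultdict

click_log = [
    {"session": "s1", "query": "brandname shoes",     "clicked_ad": "ad_42"},
    {"session": "s1", "query": "brandname boots",     "clicked_ad": "ad_42"},
    {"session": "s1", "query": "brandname modelname", "clicked_ad": "ad_42"},
    {"session": "s2", "query": "brandname shoes",     "clicked_ad": "ad_7"},
]

clicks_per_session = defaultdict(Counter)
for entry in click_log:
    clicks_per_session[entry["session"]][entry["clicked_ad"]] += 1

# Three clicks on ad_42 in session s1 would show up here as a repeated click.
repeated_clicks = {
    session: {ad: count for ad, count in counts.items() if count > 1}
    for session, counts in clicks_per_session.items()
}
```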
I find it fascinating that this kind of information may be looked at to determine the cost and position of an advertisement on the search engine, too.
Thanks, Hamlet and Derek.
Hamlet,
It is interesting to see how user behavior could be used in paid search. 🙂
I agree with you on user behavior being important to the search engines these days. The search engines do collect a lot of data about how people use the search engine, and other pages on the Web. User data also has been mentioned in a number of the patents and papers from Google, and from the other search engines.
The act of linking itself is a user behavior, and that plays a role in the crawling and ranking of pages. Query refinements from users seem to be a means of deciding whether to offer spelling suggestions to people who search for words that may appear to be misspellings. The determination of what vertical search results appear on a universal search interface also seems to be affected in many instances by user behavior.
I’m watching right now to see if Google Maps will use their new “customized driving directions” to alter the routes that they show people when they try to get from point A to point B.
“How many times a user selects a given ad in a given session.”
Hi Bill,
This is all so fascinating. I hope this isn’t too basic of a question, but I’m having trouble understanding the first factor. I am imagining, for example, a page of Adwords ads. How would someone click on the same ad more than once in a session? I am having trouble visualizing the situation. Would you be able to give me an example of how this would work, in the ‘real’ world of someone using the web? I’d appreciate it!
Kind Regards,
Miriam
Ah, you’ve made that very clear to me, now, Bill. Thank you. I can see how what you are describing could happen. I don’t believe I’ve ever done that before, myself, so I just couldn’t picture it, but now I can.
Thank you for your really helpful answer to my question. This is a really fascinating post!
Miriam
Wow, great post…just came here directly from SE Roundtable. I knew there were a bunch of factors that went into the Quality Score, but never imagined that there were at least 40+ of them. Time to start reading more about it. 😉
I was surprised by the number of factors that they listed too, Bill.
The statistical model that they describe is pretty interesting, also.
Hi Dave,
I see how you’re getting to that point. This isn’t so much personalized PPC results, but rather results that rely on a mix of user behavior not directly related to click-throughs of the ad itself.
There may be some personalization of PPC in place, based upon the location of the person viewing the ads, and the language preferences they have set in their browser or in Google. But a more personalized PPC result might rely upon something like a look at a person’s Web history, and other individualized search and browsing behavior of an individual searcher. This patent application isn’t using individualized search behavior, but rather aggregated data for a number of searchers/browsers.
There has been a patent application or two from Google that discuss personalized search results in the manner that you are writing about.
Hi Bill,
Another great post!
It looks like they are trying to provide “Personalized PPC Results”
We already know… Google rearranges the organic results based on your historical click behavior… Why not reposition the sponsored listings based on the same personal data as well?
This would help to maximize profits for Google as they zero-in on the users who already prefer “Brand X”. (The rich get richer)
I’ll be using the non-personalized search versus the personalized search more often to see if the PPC listings change with the Organic listings. Has anyone you know of documented this with PPC?
Aloha,
Dave.
Thanks Bill,
It does seem like “Personalized PPC Results” would be the next logical evolution though. I predict we’ll start to see it soon.
I think that you’re right, Dave.
Targeting ads based upon the viewer makes a lot of sense, and seems like a logical and possible next step.
I have always felt that Google “tracks” a user to see if the search result was relevant in natural and paid search.
We once had an issue with our sales funnel that stopped some people from registering to buy. Because of this, users’ session times were much shorter than usual (as you must register to view any results), and our downstream traffic to Google nearly doubled. I noted that, almost automatically, we saw an increase in the cost of our ads and we started to drop in the rankings (the issue took nearly a month to fix). When we found and fixed the error, session times increased, fewer people returned to Google to search again within the same category, our ad costs decreased, and our natural listings regained their positions.
The conclusion I drew was that Google saw a user search for “blue widget”, come to my site, and leave within a couple of minutes, only to return to Google and search for “blue widgets” again, then go to a competitor’s site and not return to Google again. Google then thinks my competitor’s site must be more relevant, since the user found what they were looking for, decides my site was not relevant, and bumps down my relevance and natural rankings as well!
Lesson: not only do SEO and PPC optimisation count, but so do stickiness and making sure that your code is clean, with no technical errors in your sales process.