A Search Results Evaluation Model Whitepaper
Search results pages (SERPs) are no longer just lists of pages ordered by ranking scores for a query term. A paper from Google offers a different way of thinking about them in an age of structured snippets and featured snippets mixed in with URL results: a Search Results Evaluation Model. The paper is:
Incorporating Clicks, Attention and Satisfaction into a Search Engine Result Page Evaluation Model, by Aleksandr Chuklin (Google Research Europe & University of Amsterdam) and Maarten de Rijke (University of Amsterdam)
Search engine results have gone through significant changes over the past couple of years. A paper from the CIKM '16 conference (October 24-28, 2016), recently published on the Research at Google pages, describes some of the user behavior that may take place around search results. The authors summarize the paper's contribution this way:
In this paper we propose a model of user behavior on a SERP that jointly captures click behavior, user attention and satisfaction, the CAS model, and demonstrate that it gives more accurate predictions of user actions and self-reported satisfaction than existing models based on clicks alone.
Sometimes people search expecting to find answers to their questions directly in the search results. In these days of featured snippets, that may happen more often than it did when an answer might only happen to appear in the snippet for a page listed in the results. So a set of SERPs can sometimes answer a question without any clicks.
This paper doesn’t describe the idea of entity metrics, but it reminds me of how the Google patent about them presented SERPs as something other than just a list of URLs ordered by IR and PageRank scores. It offers a slightly different set of things to think about when doing search results evaluation, and it’s recommended reading:
There are reasons beyond rank why some pages or entities show up in search results, and those results can have value, satisfy searchers, and even lead to clicks that may educate or entertain.
Seven years ago, I wrote about a paper from Google and Yahoo researchers on satisfaction with search results, titled Evaluating the Relevancy of Search Results Based upon Position. That one focused more on the importance of where results ranked in search results. These days, there are other things to consider.
One is the idea that a searcher might find the answer to their question without having to click through to a page, saving them the time and effort of visiting the pages the results point to. That would be a case of “good abandonment.”
This new paper also treats user clicks as a sign of possible satisfaction. That is very different from the kind of biometric satisfaction I wrote about in Satisfaction a Future Ranking Signal in Google Search Results?.
This paper instead presents a Clicks, Attention, and Satisfaction (CAS) model, and provides some information about each of those signals in the context of human raters interacting with search results.
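To make the intuition behind a CAS-style evaluation concrete, here is a minimal, hypothetical sketch. The paper's actual CAS model is a trained probabilistic model; the function, field names, and weights below are illustrative assumptions only, showing how attention, click probability, and snippet-level satisfaction (the "good abandonment" case) might combine into one SERP score.

```python
# Hypothetical sketch of a CAS-style SERP score, NOT the paper's actual
# formulation. Each result carries three illustrative fields:
#   click_prob    - estimated probability the item is clicked
#   direct_answer - satisfaction gained just from reading the snippet
#                   (captures "good abandonment" without a click)
#   click_value   - satisfaction gained if the item is clicked

def serp_utility(results, attention_decay=0.7):
    """Combine attention, clicks, and satisfaction into one SERP score."""
    utility = 0.0
    attention = 1.0  # chance the user attends to this position
    for item in results:
        # Satisfaction from examining the snippet itself
        utility += attention * item["direct_answer"]
        # Satisfaction from clicking through to the page
        utility += attention * item["click_prob"] * item["click_value"]
        # Attention decays as the user moves down the page
        attention *= attention_decay
    return utility


serp = [
    {"click_prob": 0.1, "direct_answer": 0.8, "click_value": 0.1},  # featured snippet
    {"click_prob": 0.5, "direct_answer": 0.1, "click_value": 0.6},  # organic result
]
print(round(serp_utility(serp), 3))  # prints 1.09
```

Note how a featured snippet that answers the question contributes most of its value without any click at all, which a clicks-only model would miss entirely.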
The paper also describes some of the crowdsourcing approaches used with human evaluators, including a glimpse at the questions asked to determine the relevance of results.
The Search Results Evaluation Model doesn’t tell you how to show up in search results or rank higher, but it does show how search results are changing and evolving. Search engines are trying to better understand when results might answer searchers’ questions directly on the results page. The paper also tells us that Google uses human evaluators to better understand how relevant search results might be and how satisfied searchers might be with the search engine.
Human evaluators might start judging how well question-answering snippets satisfy people looking for answers to their questions, which is good to see.