Spoken Queries and Stressed Pronouns
The future of search on the Web will likely involve spoken queries: more and more people are connecting to the Web with phones, and Google has added a voice search interface to its desktop search as well.
I thought it was interesting when I ran across a patent focused on a problem that might arise with those spoken queries, and it seemed worth writing about because it's something we will need to become acquainted with as voice search becomes more commonplace.
When Amit Singhal showed off Google’s Hummingbird update, he gave a presentation that showed Google handling searches involving pronouns. It’s worth watching for the information about Hummingbird and how Google is becoming more conversational and can handle things like stressed pronouns. The video is at:
I remembered that presentation about Hummingbird and a more conversational Google when I saw this spoken queries patent come out from Google, which explains some of the technology behind aspects of conversational search:
Resolving pronoun ambiguity in voice queries
Inventors: Gabriel Taubman and John J. Lee
US Patent 9,529,793
Granted: December 27, 2016
Filed: February 22, 2013
Methods, systems, and apparatus, including computer programs encoded on computer storage media, resolve ambiguity in received voice queries. An original voice query is received following one or more earlier voice queries, wherein the original voice query includes a pronoun or phrase. A plurality of acoustic parameters is identified for one or more words in the original voice query in one implementation. A concept represented by the pronoun is identified based on the plurality of acoustic parameters, wherein the concept is associated with a particular query of the one or more earlier queries. The concept is associated with the pronoun. Alternatively, a concept may be associated with a phrase using the query's grammatical analysis to relate the phrase to a concept derived from a prior query.
I did write about some papers that Google researchers had written about pronouns in the post Searching with Pronouns: What are they? Coreferences in Followup Queries.
But the patent granted this week had an example worth sharing about an aspect of conversational search involving stressed pronouns that wasn't covered in those papers. Here is the example:
A voice query asks: “Who was Alexander Graham Bell’s father?”
The answer: “Alexander Melville Bell”
A follow-up voice query: “What is HIS birthday?”
The answer to the follow-up query: “Alexander Melville Bell’s birthday is 3/1/1819”
The point behind this spoken queries patent is that the search engine decided it should tell the searcher the birthdate of the inventor's father, based upon the fact that the "HIS" in the second query was stressed to indicate that it was about the father and not the son mentioned in the first query.
The patent tells us of a “stress score” for spoken queries that could include “volume, pitch, frequency, duration between each spoken word, and spoken duration of words or phrases.” It tells us that “By comparing the stress score for the pronoun to a threshold, an implementation may determine that the stress score indicates that the pronoun is stressed or not.”
The impact of a stressed query? The spoken queries patent says, "For example, if a pronoun is stressed, it may indicate that it refers to a concept from an immediately preceding query, while a pronoun that is not stressed may refer to a concept from a query that occurred earlier in a series of received queries." It's an interesting assumption that does sound like it reflects how people actually convey information when they are having conversations. The patent does tell us about some of the science behind this determination about stressed pronouns:
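The patent doesn't publish any code, but the decision rule in that quote can be sketched roughly. In this toy Python sketch, the stress test, the score weighting, and the threshold value are all invented for illustration; only the stressed-versus-unstressed antecedent choice comes from the patent's description:

```python
# Toy sketch of the quoted rule: a stressed pronoun resolves against the
# immediately preceding query, while an unstressed one may reach back to an
# earlier query in the session. Score weighting and threshold are hypothetical.

STRESS_THRESHOLD = 1.2  # invented value, purely for illustration


def is_stressed(volume_ratio: float, duration_ratio: float) -> bool:
    """Call the pronoun stressed when its combined relative score beats the threshold."""
    score = (volume_ratio + duration_ratio) / 2
    return score > STRESS_THRESHOLD


def antecedent_query(query_history: list[str], stressed: bool) -> str:
    """Choose which prior query supplies the concept the pronoun refers to."""
    # Stressed -> immediately preceding query; unstressed -> an earlier one
    # (simplified here to the first query in the session).
    return query_history[-1] if stressed else query_history[0]


history = ["Who was Alexander Graham Bell?",
           "Who was his father?"]

# A loudly spoken "HIS" points back at the most recent query (the father)...
print(antecedent_query(history, is_stressed(1.33, 1.6)))
# ...while an unstressed "his" reaches further back (Bell himself).
print(antecedent_query(history, is_stressed(1.0, 1.0)))
```

This is only a caricature of what a production system would do, but it captures the patent's central idea: prosody, not just grammar, decides which earlier concept a pronoun binds to.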
For example, if the absolute measure for the volume of the pronoun is 80 dB and the average volume for the other words in the voice query is 60 dB, the ratio of the volumes is 1.33. This relative volume measure for the pronoun indicates that the volume of the pronoun is 33% greater than the volume of the rest of the voice query. Alternatively, the relative measures can be differences between the acoustic parameters for the pronoun and the acoustic parameters for the other words in the voice query. For example, if the absolute measure for the time duration of the pronoun is 80 ms and the average time duration of the other words in the voice query is 50 ms, the difference in the time duration is 30 ms. This relative time duration measure for the pronoun indicates that the time duration of the pronoun is 30 ms more than the average time duration for the words in the voice query. Alternatively, the relative measures of the acoustic parameters for the pronoun can be relative to the acoustic parameters for only the words that immediately precede and follow the pronoun.
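The arithmetic in that passage is simple enough to restate. This Python sketch (the function names are mine, not the patent's) just computes the relative measures the passage describes, reproducing its 1.33 volume ratio and 30 ms duration difference:

```python
# Hypothetical helpers restating the patent's relative acoustic measures.

def relative_volume(pronoun_db: float, other_words_db: list[float]) -> float:
    """Ratio of the pronoun's volume to the average volume of the other words."""
    average = sum(other_words_db) / len(other_words_db)
    return pronoun_db / average


def relative_duration_ms(pronoun_ms: float, other_words_ms: list[float]) -> float:
    """How much longer the pronoun is than the average word, in milliseconds."""
    average = sum(other_words_ms) / len(other_words_ms)
    return pronoun_ms - average


# An 80 dB pronoun against words averaging 60 dB: a ratio of about 1.33 (33% louder).
print(round(relative_volume(80.0, [60.0, 60.0, 60.0]), 2))
# An 80 ms pronoun against words averaging 50 ms: 30 ms longer.
print(relative_duration_ms(80.0, [50.0, 50.0, 50.0]))
```

The patent's final alternative — comparing only against the words immediately before and after the pronoun — would just mean passing those two neighbors in as the comparison list instead of the whole query.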
The spoken queries patent provides some other examples of how stresses might be understood, including how grammatical differences may play a role.
Interestingly, signals like these may influence how spoken queries are interpreted. If you've been wondering how Google might understand pronouns, you now have an idea of how it could understand stressed ones.
27 thoughts on “Google and Spoken Queries: Understanding Stressed Pronouns”
This is a very interesting and great article. You will surely like this also because it is great stuff; it gives us lots of interest and pleasure.
I see this whole voice thing to be a long and necessary road for Google.
Conversational search does seem to make a lot of sense as a path for Google to follow. It’s interesting seeing Google dotting all the i’s and crossing all the t’s.
I see the following things which look like they are unique to spoken queries:
1. Queries may refer to previous spoken queries, and entities within those
2. Queries may be longer and contain more words than written queries
3. Spoken queries may vary in loudness and speaking speed, and words may be considered "stressed," which may indicate importance.
4. I remember reading something from Google (a patent or a paper) that talked about how an accent might be identified and used to personalize search results based upon that accent.
Really interesting! Not many bloggers cover the subject the way you do here! I just wonder why Instagram is so underrated? In my case I found it the most effective.
I’ve seen some posts about conversational search, but I don’t think there is a lot of data to base many blog posts upon. As for Instagram, I have seen some articles about it, but it’s possible that the people who are finding it effective need to blog about it to make it more respected.
Nice research. Maybe CSS could be used to disambiguate polysemous queries too.
The patent was focused upon queries themselves, and how Google might interpret spoken queries, rather than how a webmaster might optimize a page for spoken queries – this patent doesn’t give any insight into optimizing for those. I thought it was worth writing about because it gives us some insight into how Google is working, rather than how we can better optimize our pages.
I like seeing that Google is working hard at tweaking the finer things when it comes to speech recognition. I wish Siri could understand when I ask it to “call Stephan”, and it responds with all kinds of things other than just calling Stephan – ack!!!
It was good seeing this level of understanding from Google with pronouns. I hope it is that sophisticated in other aspects of speech interaction.
Google is working hard; really nice article. I never saw this type of post before – really interesting.
Great article, and very informative things you discuss. I think you did a lot of research. Thanks a lot for sharing these points.
This spoken queries thing will have a lot of pros and cons, but it’s good to hear that Google is working hard on it. It is really a good idea when it comes to searching. Hope it will be a success.
Some facts I didn’t know; it’s surprising how great Google is. Thanks for sharing. Can you give us more posts? Really impressed.
Thanks for sharing this great informative article. I really like it, as you did a lot of research. Keep sharing such useful posts.
I really love getting insights into how Google may be doing something like understanding spoken queries like this.
There is a lot of talk about Google’s voice search feature, and learning how Google handles these voice queries will hopefully give me a better understanding of how to optimize for them. Thanks for this really helpful piece of information.
I am really happy with the article’s quality and presentation. Thanks a lot for the great stuff. I am very thankful for this site.
Thank you so much for writing this great blog with very helpful information.
Absolutely amazing work by Google, albeit in baby steps. It will likely be a long time before interpreting verbal intent beyond simply understanding word pronunciation is a flawless process, due to the seemingly endless multitude of variables, but kudos to Google for continuing to explore and develop. This is definitely a valuable step in that discovery process. Thank you for sharing.
I thought it was interesting how Google was interpreting spoken queries as well. It does seem like it potentially has risks, but it’s good hearing examples of how it might be done.
Thanks for sharing this informative blog. This blog is very useful for everyone. Keep it up.
Hi there, thanks for a very interesting article. The example with Bell, and the fact that how “his” is stressed may influence how Google understands the query, really shows how Google is getting smarter. However, voice search isn’t (at least not yet) the “core” search, as I believe written (keyword-based) search is still the major way people search. I am wondering how Google could detect what we mean if we ask the same two questions using the traditional method. How can Google know what we want to stress? Will it ever be possible for it to draw such conclusions based on what we type? Will it measure how fast we type? At what word we stop, and for how long?
The patent focuses upon spoken queries, and stresses upon words in those, so it doesn’t appear to have a written comparison. I suspect the number of spoken queries is growing, as is the number of searches that take place on mobile devices. I do seem to recall that Google was paying attention to how we type; and we know that Google was trying to provide search results as we were typing:
Maybe they can see how we type when we perform written queries, and results might be influenced by that. 🙂
Thank you. Happy to hear that you find my posts useful.
Great informational blog. Worth reading.
Thank you for your kind words.
Nice blog!! I’m impressed with your blog. Thanks a lot for the great stuff. I am very thankful for this site.
Comments are closed.