Timely Featured Snippets and Sentence Compression
How is a knowledge graph updated when some earth-shaking event takes place? Is a search engine manually editing information in that knowledge graph? It seems like an area where machine learning could automate those updates and keep the graph current.
Another place that would benefit from machine learning is generating the featured snippets that answer questions people ask at Google, and it appears that Google thought it might be useful there, too. A Wired article from Monday describes how a sentence compression algorithm behind these featured snippets might work:
At the heart of this approach is the crawling of a data store of news articles and other sources, with the help of a “massive team of PhD linguists it calls Pygmalion”, and the use of what are referred to as “sentence compression” algorithms, which might generate answers to questions from those news sources for featured snippets.
Curious and hopeful, I went in search of patents from Google that used “sentence compression” algorithms, and I happened to find one:
Methods and apparatus related to sentence compression
Inventors: Ekaterina Filippova and Yasemin Altun
Assigned to: Google
US Patent 9,336,186
Granted: May 10, 2016
Filed: October 10, 2013
Methods and apparatus related to sentence compression. Some implementations are generally directed toward generating a corpus of extractive compressions and associated sentences based on a set of headlines, sentence pairs from documents. Some implementations are generally directed toward utilizing a corpus of sentences and associated sentence compressions in training a supervised compression system. Some implementations are generally directed toward determining a compression of a sentence based on edge weights for edges of the sentence that are determined based on weights of features associated with the edges.
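The last part of that abstract, compressing a sentence based on edge weights that are derived from weights of features on the edges, can be sketched roughly. Everything concrete below (the feature names, the weight values, and the greedy keep/drop selection) is my own assumption for illustration; the patent describes the approach only at the level quoted above.

```python
# Minimal sketch of edge-weight-based extractive compression, assuming a
# dependency-style structure where each word has one edge to its head.
# Feature names and weights are invented for illustration only.

# hypothetical learned feature weights
FEATURE_WEIGHTS = {
    "is_root": 2.0,
    "is_subject": 1.5,
    "is_object": 1.2,
    "is_verb_head": 1.0,
    "is_modifier": -0.5,
}

def edge_weight(features):
    """Score an edge as the sum of the weights of its active features."""
    return sum(FEATURE_WEIGHTS.get(f, 0.0) for f in features)

def compress(words, edge_features, threshold=0.0):
    """Keep words whose head-edge weight clears the threshold, in order.

    words: tokens in sentence order.
    edge_features: for each token, the feature list on its edge to its head.
    """
    kept = [w for w, feats in zip(words, edge_features)
            if edge_weight(feats) > threshold]
    return " ".join(kept)
```

So a sentence like “Google quietly updated its snippets” would keep the high-weight subject, root verb, and object edges and drop the modifier edges, yielding the compression “Google updated snippets”.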
The patent doesn’t mention featured snippets, but it does mention paraphrasing sentences in a data store of titles and sentences from a news source:
The documents from which the set of headlines, sentence pairs is determined may be news story documents. In some of those implementations, for each of the headline, sentence pairs the sentence is a first sentence of the respective document.
Determining the set of headlines, sentence pairs of the set may include: determining non-conforming headline, sentence pairs from a larger set of headlines, sentence pairs; and omitting the non-conforming headline, sentence pairs from the set of headlines, sentence pairs. Determining non-conforming headline, sentence pairs may include determining the non-conforming sentence pairs as those that satisfy one or more of the following conditions: the headline is less than a headline threshold number of terms, the sentence is less than a sentence threshold number of terms, the headline does not include a verb, and the headline includes one or more of a noun, verb, adjective, and adverb whose lemma does not appear in the sentence.
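The filtering conditions quoted above can be sketched as a simple heuristic. In this sketch the threshold values and the toy lemmatizer are my assumptions (the patent does not give exact numbers), and the verb and content-word tags are assumed to come from a POS tagger upstream:

```python
# Sketch of the patent's non-conforming (headline, sentence) pair filter.
# Thresholds and the toy lemmatizer are hypothetical.

HEADLINE_MIN_TERMS = 4   # assumed headline threshold
SENTENCE_MIN_TERMS = 8   # assumed sentence threshold

def lemma(word):
    """Toy lemmatizer: lowercase, strip punctuation, drop a trailing 's'."""
    w = word.lower().strip(".,!?\"'")
    return w[:-1] if w.endswith("s") and len(w) > 3 else w

def is_conforming(headline, sentence, headline_verbs, content_words):
    """Return True if a (headline, sentence) pair passes all four filters.

    headline_verbs: headline tokens tagged as verbs (from a POS tagger).
    content_words: headline tokens tagged noun/verb/adjective/adverb.
    """
    h_terms = headline.split()
    s_terms = sentence.split()
    if len(h_terms) < HEADLINE_MIN_TERMS:   # headline too short
        return False
    if len(s_terms) < SENTENCE_MIN_TERMS:   # sentence too short
        return False
    if not headline_verbs:                  # headline has no verb
        return False
    sentence_lemmas = {lemma(t) for t in s_terms}
    # every headline content word's lemma must appear in the sentence
    return all(lemma(w) in sentence_lemmas for w in content_words)
```

A pair fails if any one condition trips, which matches the patent's framing of omitting pairs that satisfy “one or more” of the listed conditions.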
A Related Sentence Compression Whitepaper
I had hoped to find more about how this algorithm might be used to generate featured snippets, but the patent doesn’t provide many details on that aspect. It does appear to be based on natural language processing, so I went looking through Google whitepapers to see if I could find more, and I found a paper that looked related. The Research at Google page for the paper Overcoming the Lack of Parallel Data in Sentence Compression tells us that “A subset of the described data (10,000 sentence & extracted headlines pairs, with source URL and annotations) is available for download.”
The data available for download includes sentences from news articles that have been tagged with parts of speech. It looks like a lot of work, but it appears to be done in a way that takes advantage of automated processes that can keep such information up to date and show timely featured snippets.
This appears to be how terms such as “sentence compression” become relevant to what SEOs do.
There is some negative news about this Pygmalion project and featured snippets that describes it as more human-driven: ‘A white-collar sweatshop’: Google Assistant contractors allege wage theft. And there’s a discussion on Twitter from late May of this year about payment for the many linguists working on Pygmalion at Google:
This team, Pygmalion, is one of many at Google that creates the data needed to train machine learning models. All the contractors have BAs in linguistics, many have MAs, some have PhDs. They are paid $25-$35/hour from Adecco, a staffing firm. https://t.co/M0rngryuOo
— Julia Carrie Wong (@juliacarriew) May 29, 2019
Some posts I’ve written about patents involving question answering:
- 7/19/2007 – Search Engines Crawling FAQs to Learn How to Answer Questions?
- 9/21/2014 – Google May Use Question Answering to Populate the Knowledge Graph
- 10/12/2014 – How Google May Use Entity References to Answer Questions
- 12/30/2014 – Featured Snippets – Taken from Authority Websites
- 12/31/2014 – Featured Snippets – Using Query Intent Templates to Identify Answers
- 2/11/2015 – How Google was Corroborating Facts for Featured Snippets
- 7/12/2015 – How Google May Answer Questions in Queries with Rich Content Results
- 9/9/2015 – When Google Started Showing Featured Snippets
- 11/30/2016 – Answering Featured Snippets Timely, Using Sentence Compression on News
- 6/19/2017 – Google Extracts Facts from the Web to Provide Fact Answers
- 7/10/2019 – How Google May Handle Question Answering when Facts are Missing
Last Updated June 26, 2019.