Those of us who are used to doing Search Engine Optimization (SEO) have long focused on URLs filled with content, the links between that content, and how algorithms such as PageRank (based upon links pointing between pages) and information retrieval scores (based upon the relevance of that content) determine how well pages rank in search results for the queries searchers enter into search boxes. In this model, web pages are the nodes of a graph, connected by links. This was the first generation of SEO.
Chances are good that many of the methods we have been using to do SEO will remain the same as new features appear in search, such as knowledge panels, rich results, featured snippets, structured snippets, search by photos, and expanded schema covering many more industries and features than it does at present.
How Query Streams Might be Used to Build Ontologies
What are query stream ontologies, and how might they change search?
Search engines trained us to use keywords when we searched – to try to guess which words or phrases might be the best ones to find something we are interested in, something we have a situational or informational need to learn more about. Keywords were an important and essential part of SEO – trying to get pages to rank highly in search results for certain keywords found in the queries people searched for. SEOs still optimize pages for keywords, hoping to use a combination of information retrieval relevance scores and link-based PageRank scores to get pages to rank highly in search results.
With Google moving towards a knowledge-based attempt to find “things” rather than “strings,” we are seeing patents that focus upon returning results that provide answers to questions in search results. One of those, from January, describes how query stream ontologies might be created from searchers’ queries and then used to respond to fact-based questions with information about the attributes of entities.
There is a white paper from Google, co-authored by the inventors of this patent and published around the time the patent was filed in 2014, that is worth spending time reading through. The paper is titled Biperpedia: An Ontology for Search Applications.
When you search at Google, the answers you receive now sometimes include additional questions, often under the label “People Also Ask.” I was curious whether I might be able to find a patent about these questions, and I saw that these “People Also Ask” questions were sometimes referred to as “related questions.”
I searched through Google patent search for “related questions” and came up with a patent named “Generating related questions for search queries.” When I looked at the screenshots that accompanied the patent, they appeared to be very similar to the “People Also Ask” questions Google shows us today in search results.
The Google Knowledge Graph Search API, on a query for Google, shows the following entities and result scores for them. I thought they were diverse enough to be interesting and worth sharing. A couple of the ones listed seem odd, such as the Indian action movie “Thuppakki” and the town in Kansas, “Topeka.” (It seems there is a song titled “Google Google” in the film Thuppakki, and in 2010 Topeka renamed itself “Google” to try to attract Google Fiber to the area.) We are told by Google that “Results with higher result scores are considered better matches.”
These are the Google Knowledge Graph Search API results on a search for Google:
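If you want to reproduce this kind of lookup yourself, the Knowledge Graph Search API is a public REST endpoint that returns entities and their result scores as JSON. Here is a minimal sketch in Python: it builds the request URL and pulls out (entity name, resultScore) pairs from a response. The `sample` response below is a hypothetical fragment in the documented response shape – the scores are placeholders, not the actual values Google returns for this query – and you would need your own API key to make a live request.

```python
import json
from urllib.parse import urlencode

# Public endpoint of the Google Knowledge Graph Search API.
KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def build_search_url(query, api_key, limit=10):
    """Build the request URL for a Knowledge Graph entity search."""
    params = {"query": query, "key": api_key, "limit": limit, "indent": True}
    return KG_ENDPOINT + "?" + urlencode(params)

def extract_scores(response_json):
    """Return (entity name, resultScore) pairs, highest score first."""
    items = response_json.get("itemListElement", [])
    pairs = [(el["result"].get("name"), el.get("resultScore")) for el in items]
    return sorted(pairs, key=lambda p: p[1], reverse=True)

# Hypothetical response fragment in the documented shape; the scores here
# are illustrative placeholders, not real API output.
sample = {
    "itemListElement": [
        {"result": {"name": "Google", "@type": ["Corporation"]}, "resultScore": 700.0},
        {"result": {"name": "Thuppakki", "@type": ["Movie"]}, "resultScore": 20.0},
    ]
}

for name, score in extract_scores(sample):
    print(name, score)
```

Fetching `build_search_url("Google", YOUR_KEY)` with any HTTP client and passing the parsed JSON to `extract_scores` would produce a ranked list like the one shown above.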
One of the limitations of information on the Web is that it is organized differently at each site. As a newly granted Google patent about context vectors notes, there is no official catalog of the information available on the internet, and each site has its own organizational system. Search engines exist to index that information, but they have issues, described in this new patent, that make finding information challenging.
Limitations of Conventional Keyword-Based Search Engines
Visitors to a website may want to perform certain actions related to Entities (specific places or people or things) that are displayed to them on the Web.
For example, at a page for a restaurant (an entity), a person viewing the site may want to make a reservation or get driving directions to the restaurant from their current location. Doing those things today may require a number of steps: selecting and copying the name of the restaurant, pasting it into a search box and submitting it as a search query, selecting the site from the search results, determining whether making a reservation is possible on the site, and then providing the information necessary to make one. Getting driving directions may likewise require multiple steps.
Using a touch-screen device may be even more difficult, because input would then be limited to touch.
A patent granted to Google this week describes a way to easily identify an entity such as a restaurant on a touch device, select it, and take some action associated with that entity based upon the context of the site where the entity appears. Actions such as booking a reservation at a restaurant found on a website, or getting driving directions to it, could then be selected easily by the visitor.