Visitors to a website may want to perform certain actions related to entities (specific places, people, or things) that are displayed to them on the Web.
For example, at a page for a restaurant (an entity), a person viewing the site may want to make a reservation or get driving directions to the restaurant from their current location. Doing so can require a number of steps: selecting and copying the name of the restaurant, pasting it into a search box and submitting it as a query, choosing the restaurant's site from the search results, determining whether reservations can be made on that site, and then providing the information necessary to make one. Getting driving directions may likewise take multiple steps.
On a touch screen device this can be even more difficult, since input may be limited to touch.
A patent granted to Google this week describes a way to easily identify an entity such as a restaurant on a touch device, select it, and take an action associated with that entity based upon the context of the site where it appears. Actions such as booking a reservation at a restaurant found on a website, or getting driving directions to it, could be selected easily by a visitor to the site.
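The flow the patent describes, identifying an entity and then offering actions suited to the context of the page it appears on, can be pictured with a rough sketch like the one below. All of the names and the action lists here are hypothetical illustrations, not details from the patent itself:

```python
# Hypothetical sketch: map a selected entity's type, plus the context of
# the page it appears on, to a list of suggested actions.

ACTIONS_BY_TYPE = {
    # entity type -> actions that make sense for that type (illustrative)
    "restaurant": ["make_reservation", "get_directions", "view_menu"],
    "person":     ["view_profile", "send_message"],
    "place":      ["get_directions", "view_map"],
}

def suggest_actions(entity_type, page_context):
    """Return candidate actions for an entity, ordered by page context."""
    actions = ACTIONS_BY_TYPE.get(entity_type, [])
    # On a review site, booking is the most likely intent, so float it first.
    if page_context == "review_site":
        actions = sorted(actions, key=lambda a: a != "make_reservation")
    return actions

print(suggest_actions("restaurant", "review_site"))
# ['make_reservation', 'get_directions', 'view_menu']
```

The point of the sketch is only that the same entity can surface different actions depending on where it is found, which is the behavior the patent's claims revolve around.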
These details come from an anonymous source who also gave us a few more details about the project. The report states that a new feature will be integrated, allowing users to outline specific areas of an image in order to target their searches more directly.
In Google Goggles, one can only search the whole image, which has proven to introduce plenty of noise: images often include distractions, background items, and other objects that may throw off a search result. According to the sketch provided, the system will also be able to recommend retailers for purchasing products, among other details.
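The region-targeting idea, searching only the user-outlined portion of an image instead of the whole frame, amounts to cropping before matching. A minimal sketch, treating the image as a row-major pixel grid (the function name and data layout are my own, not from the report):

```python
def crop_region(pixels, top, left, bottom, right):
    """Keep only the user-outlined rectangle of a row-major pixel grid,
    so a visual search sees the selected object without background clutter."""
    return [row[left:right] for row in pixels[top:bottom]]

# A 4x4 "image" of (row, col) pixels; the user outlines the central 2x2 area.
image = [[(r, c) for c in range(4)] for r in range(4)]
region = crop_region(image, 1, 1, 3, 3)
print(region)  # [[(1, 1), (1, 2)], [(2, 1), (2, 2)]]
```

Anything outside the outlined rectangle, the "distractions" the report mentions, simply never reaches the matching step.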
Furthermore, this technology is said to have been tested in “wearable computing devices,” which suggests it may come to products like Google Glass and possibly even VR (or AR) headsets.
Back in September of 2009, I wrote a blog post that I titled Google’s 10 Oddest Patents. The first patent on that list was one named Instrument for medical purposes; I included it mostly because Google was a search company, and it felt odd that Google would hold a patent on a medical process. That one used “ultrasonic sound to investigate the structural makeup of biological tissue in organs and vessels.”
Times have changed. Since that time, Google has restructured itself under a holding company named Alphabet, which runs all elements of the company. A branch that had been referred to as “Google Life Sciences” recently changed names as well, to Verily Life Sciences.
What role and what kind of impact might this new subsidiary have? I was wondering whether Google would change its patent assignments along with the name changes, and I was surprised to see it do so, assigning 148 patents to Verily Life Sciences on two different days. It’s an interesting list, and I’ve provided it here. The company may technically own other patents as well, but this list points to a number that could become products offered to the public, after any government approval it may need to pursue.
In the post, the author (Chuck Rosenberg) tells us how they improve image searching at Google by labeling images with entities rather than text strings. The entities they used are the kind you would find at a source such as Freebase, and he tells us that they use Freebase Machine ID numbers for those labels:
As in ImageNet, the classes were not text strings, but are entities, in our case we use Freebase entities which form the basis of the Knowledge Graph used in Google search. An entity is a way to uniquely identify something in a language-independent way. In English when we encounter the word “jaguar”, it is hard to determine if it represents the animal or the car manufacturer. Entities assign a unique ID to each, removing that ambiguity, in this case “/m/0449p” for the former and “/m/012x34” for the latter.
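The disambiguation described in the quote is easy to illustrate: the two senses of “jaguar” get distinct machine IDs, so an image label stays unambiguous and language-independent. In the toy sketch below, only the two MIDs quoted above come from the post; the table structure and function are mine:

```python
# Toy sketch: label images with Freebase entity IDs rather than text strings.
# The two MIDs come from the quoted post; everything else is illustrative.
ENTITY_SENSES = {
    "jaguar": {
        "/m/0449p":  "jaguar (the animal)",
        "/m/012x34": "Jaguar (the car manufacturer)",
    },
}

def describe_entity(mid):
    """Resolve a machine ID back to a human-readable description."""
    for senses in ENTITY_SENSES.values():
        if mid in senses:
            return senses[mid]
    return None

print(describe_entity("/m/0449p"))   # jaguar (the animal)
print(describe_entity("/m/012x34"))  # Jaguar (the car manufacturer)
```

A classifier that emits “/m/0449p” has said something precise in a way the ambiguous string “jaguar” never could, and the same ID works regardless of the query language.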
Advertising on the Web is going through some changes because of how smartphones and tablets track visitors to a site, and how advertisements may broadcast high-frequency sounds that act as audio watermarks other devices can pick up. Imagine watching TV: your TV broadcasts a high-frequency sound from an advertisement, your phone hears it and shares it with the advertiser, who may then track whether you search for or purchase the product offered on a website.
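The mechanics behind such an audio watermark can be sketched simply: the ad encodes bits as near-ultrasonic tones, and a listening device measures the signal's energy at those frequencies to recover them. The frequencies, bit length, and encoding below are my own illustrative choices, not taken from any specific advertising system:

```python
import math

RATE = 44100           # samples per second
F0, F1 = 18000, 19000  # near-ultrasonic tones for bit 0 / bit 1 (illustrative)
N = 4410               # samples per bit (0.1 s)

def emit_bit(bit):
    """Synthesize one bit as a high-frequency tone (what the ad would play)."""
    f = F1 if bit else F0
    return [math.sin(2 * math.pi * f * n / RATE) for n in range(N)]

def goertzel_power(samples, freq):
    """Signal power at `freq`, via the Goertzel algorithm (a one-bin DFT)."""
    coeff = 2 * math.cos(2 * math.pi * freq / RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_bit(samples):
    """What the listening phone would do: compare power at the two tones."""
    return 1 if goertzel_power(samples, F1) > goertzel_power(samples, F0) else 0

message = [1, 0, 1, 1]
received = [detect_bit(emit_bit(b)) for b in message]
print(received)  # [1, 0, 1, 1]
```

Tones around 18–19 kHz sit at the edge of adult hearing but well within what a phone microphone sampling at 44.1 kHz can capture, which is what makes this kind of cross-device signaling plausible.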
A couple of months ago, I wrote about a Google patent that involved rewriting queries, in a post titled Investigating Google RankBrain and Query Term Substitutions. There’s likely a lot more to how Google’s RankBrain approach works, but I came across a patent that seems related to the one I covered in that post, and thought it was worth sharing and starting a discussion about. The patent I wrote about there was Using concepts as contexts for query term substitutions. The title of this new patent is very similar (Synonym identification based on categorical contexts), and it was granted on December 1st of this year.
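One way to picture what “categorical contexts” might mean for synonym identification: whether a substitute term is a good synonym can depend on the category suggested by the rest of the query. The sketch below is a toy of that general idea under my own invented data and category rules, not the patent’s actual method:

```python
# Toy sketch: a candidate synonym is used only if it is valid within the
# category implied by the rest of the query. All data here is invented.
SYNONYMS_BY_CATEGORY = {
    "automobiles": {"jaguar": ["jaguar cars"], "car": ["auto", "vehicle"]},
    "animals":     {"jaguar": ["big cat"]},
}

def classify(query_terms):
    """Crude category guess from context terms (stand-in for the real system)."""
    return "automobiles" if "dealership" in query_terms else "animals"

def rewrite_query(query_terms):
    """Expand each term with synonyms that are valid for the query's category."""
    table = SYNONYMS_BY_CATEGORY.get(classify(query_terms), {})
    expanded = []
    for term in query_terms:
        expanded.append(term)
        expanded.extend(table.get(term, []))
    return expanded

print(rewrite_query(["jaguar", "dealership"]))
# ['jaguar', 'jaguar cars', 'dealership']
print(rewrite_query(["jaguar"]))
# ['jaguar', 'big cat']
```

The same word expands differently depending on category, which is the intuition the patent title points at: a synonym is only a synonym in context.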
The new patent starts off describing a scenario that is a good example of how it works. The inventors tell us: