Systems and methods consistent with the principles of the invention may provide a reasonable surfer model that indicates that when a surfer accesses a document with a set of links, the surfer will follow some of the links with higher probability than others. This reasonable surfer model reflects the fact that not all of the links associated with a document are equally likely to be followed. Examples of links unlikely to be followed include “Terms of Service” links, banner advertisements, and links unrelated to the document.
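The patent does not publish a formula, but the idea can be sketched as follows. This is a hypothetical illustration: the feature names and the weights (0.1, 0.2, 0.5) are assumptions for demonstration, not values from the patent.

```python
# Hypothetical "reasonable surfer" link weighting. The features and
# weights below are illustrative assumptions, not taken from the patent.

def link_weights(links):
    """Assign each link a follow probability based on simple features."""
    raw = {}
    for link in links:
        weight = 1.0
        if link.get("is_boilerplate"):    # e.g. "Terms of Service" footer links
            weight *= 0.1
        if link.get("is_advertisement"):  # e.g. banner ads
            weight *= 0.2
        if not link.get("topically_related", True):  # unrelated to the document
            weight *= 0.5
        raw[link["url"]] = weight
    total = sum(raw.values())
    return {url: w / total for url, w in raw.items()}

links = [
    {"url": "/article", "topically_related": True},
    {"url": "/terms", "is_boilerplate": True},
    {"url": "/ad", "is_advertisement": True},
]
print(link_weights(links))
```

The probabilities sum to one, so a related editorial link ends up far more likely to be "followed" than a footer or banner link on the same page.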
Google’s original PageRank algorithm is built on what its inventor referred to as the Random Surfer model, which ranks pages on the Web according to the probability that a person randomly following links would end up at a particular page:
The rank of a page can be interpreted as the probability that a surfer will be at the page after following a large number of forward links. The constant α in the formula is interpreted as the probability that the web surfer will jump randomly to any web page instead of following a forward link.
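The random surfer model above can be sketched as a short power iteration. Note that this follows the quoted interpretation, where the constant alpha is the probability of a random jump (not the damping factor); the three-page graph is a made-up example.

```python
# Minimal power-iteration sketch of the random surfer model.
# alpha = probability of jumping to a random page, per the quoted text.

def pagerank(graph, alpha=0.15, iterations=50):
    """graph: dict mapping each page to the list of pages it links to."""
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Probability mass flowing into p from pages that link to it.
            inflow = sum(rank[q] / len(graph[q]) for q in pages if p in graph[q])
            new[p] = alpha / n + (1 - alpha) * inflow
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(graph))
```

Since every page here has outlinks, the ranks remain a probability distribution: they sum to one, matching the interpretation of rank as the probability of finding the surfer at a given page.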
Do you search through Google on your phone? How do you know whether or not Google is watching you as you do, and keeps an eye on whether or not you like the results you receive during your searches? Could satisfaction with search results be a ranking signal that Google may use now, or in the future?
A newly published Google patent application describes technology that would modify the scoring and ranking of query results using biometric indicators of user satisfaction or negative engagement with a search result. In other words, Google would track how satisfied or unsatisfied someone might be with search results and, using machine learning, build a model based upon that satisfaction, raising or lowering search results for a query. This kind of reaction might be captured using a camera on a searcher’s phone to see their reaction to a search result, as depicted in the following screenshot from the patent:
This satisfaction would be based upon Google tracking and measuring biometric parameters of a user, obtained after a search result is presented, to determine whether those parameters may indicate negative engagement by the user with that result.
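The patent describes a machine-learned model over biometric signals; as a much simpler illustration of the end effect, here is a sketch in which a made-up "satisfaction score" in [0, 1] nudges a result's ranking score up or down. The function name, the 0.2 strength factor, and the sample data are all assumptions for illustration.

```python
# Illustrative only: a satisfaction signal of 0.5 is neutral; scores above
# it raise a result's ranking score, scores below it lower it.

def adjust_score(base_score, satisfaction, strength=0.2):
    """Raise or lower a result's score around a neutral satisfaction of 0.5."""
    return base_score * (1.0 + strength * (satisfaction - 0.5) * 2)

# (result, base relevance score, measured satisfaction) -- made-up data
results = [("page-a", 0.90, 0.2),
           ("page-b", 0.85, 0.9)]
reranked = sorted(results, key=lambda r: adjust_score(r[1], r[2]), reverse=True)
print([r[0] for r in reranked])
```

In this toy example, a slightly less relevant result that users reacted well to overtakes a more relevant result that produced negative engagement, which is the behavior the patent attributes to its model.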
Visitors to a website may want to perform certain actions related to Entities (specific places or people or things) that are displayed to them on the Web.
For example, at a page for a restaurant (an entity), a person viewing the site may want to create a reservation or get driving directions to the restaurant from their current location. Doing those things may require a person to take a number of steps, such as selecting the name of the restaurant and copying it, pasting that information into a search box, and submitting it as a search query, selecting the site from search results, determining if making a reservation is possible on the site, and then providing information necessary to make a reservation; getting driving directions may also require multiple steps.
Performing those steps on a touch screen device may be even more difficult, because input would be limited to touch.
A patent granted to Google this week describes a way to easily identify an entity such as a restaurant on a touch device, select it, and take some action associated with that entity based upon the context of the site where the entity appears. Actions such as booking a reservation at a restaurant found on a website, or getting driving directions to it, could be easily selected by a visitor.
These details come from an anonymous source who also gave us a few more details on the project. The report states a new feature will be integrated, allowing users to outline specific areas of an image in order to directly target their searches.
In Google Goggles, one can only search the whole image, which has proven to produce plenty of discrepancies. Images often contain plenty of distractions, background items, and other objects that may throw off a search result. According to the sketch provided, the system will also be able to recommend retailers for purchasing products, among other details.
Furthermore, it is said this technology has also been tested in “wearable computing devices”. This suggests the technology may come to products like Google Glass, and possibly even VR (or AR) headsets.
Back in September of 2009, I wrote a blog post that I titled Google’s 10 Oddest Patents. The first that I included in that list was one named Instrument for medical purposes. I included it mostly because Google was a search company, and it felt odd that Google would have a patent on a medical process. That one used “ultrasonic sound to investigate the structural makeup of biological tissue in organs and vessels.”
Times have changed, and since that time, Google has restructured itself under a holding company named Alphabet, which runs all elements of the company. A branch of the company that had been referred to as “Google Life Sciences” recently changed names as well, to Verily Life Sciences.
What role and what kind of impact might this new subsidiary have? I wondered whether Google would change the patent assignments it had made along with the name changes, and I was surprised to see it do so, assigning 148 patents to Verily Life Sciences on two different days. It’s an interesting list, and I’ve provided it here. The company may technically have ownership of other patents as well, but this list points to a number that could become products offered to the public, after any government approval that may be needed.
In the post, the author (Chuck Rosenberg) tells us how they improve image searching at Google by labeling images with entities, rather than text strings. The entities they use are ones you would find at a source such as Freebase. He tells us that they use Freebase Machine ID numbers for those labels:
As in ImageNet, the classes were not text strings, but are entities, in our case we use Freebase entities which form the basis of the Knowledge Graph used in Google search. An entity is a way to uniquely identify something in a language-independent way. In English when we encounter the word “jaguar”, it is hard to determine if it represents the animal or the car manufacturer. Entities assign a unique ID to each, removing that ambiguity, in this case “/m/0449p” for the former and “/m/012x34” for the latter.
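The quoted passage can be made concrete with a toy lookup. The two machine IDs come from the quote itself; the label table and the helper function are illustrative assumptions, showing how one ID resolves to different display labels per language while the ID stays the same.

```python
# Toy illustration of language-independent entity labels.
# The two Freebase machine IDs are from the quoted post; the label
# table and helper are made up for this example.

ENTITY_LABELS = {
    "/m/0449p":  {"en": "jaguar (animal)", "de": "Jaguar (Tier)"},
    "/m/012x34": {"en": "Jaguar Cars",     "de": "Jaguar Cars"},
}

def label_for(entity_id, language="en"):
    """Resolve an entity ID to a display label in the requested language."""
    return ENTITY_LABELS[entity_id][language]

print(label_for("/m/0449p"))         # the animal
print(label_for("/m/012x34", "de"))  # the car manufacturer, German label
```

The ambiguity of the string “jaguar” disappears because the image is labeled with the ID, not the word: the same ID denotes the same thing in every language.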
Advertising on the Web is going through some changes because of how smart phones and tablets track visitors to a site, and how advertisements may broadcast high-frequency sounds that act as audio watermarks that other devices can pick up. Imagine watching TV: an advertisement broadcasts a high-frequency sound that your phone hears and shares with the advertiser, who may then track whether you search for or purchase the product offered on a website.
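As a rough sketch of how a device might detect such a sound, here is a minimal example, and not any vendor's actual scheme: the "watermark" is simply a near-ultrasonic tone (18 kHz is an assumed frequency), and the Goertzel algorithm measures the signal energy at that one frequency in a buffer of microphone samples.

```python
import math

# Illustrative sketch only: detect energy at one near-ultrasonic frequency
# using the Goertzel algorithm. The 18 kHz watermark frequency is assumed.

def goertzel_power(samples, sample_rate, target_hz):
    """Return the signal power at target_hz via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest frequency bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

RATE = 44100
TONE = 18000  # assumed watermark frequency, near the top of human hearing
marked = [math.sin(2 * math.pi * TONE * t / RATE) for t in range(4410)]
silent = [0.0] * 4410

print(goertzel_power(marked, RATE, TONE) > goertzel_power(silent, RATE, TONE))
```

A real system would need to encode data (e.g. an ad identifier) across several such tones and cope with room noise, but the core idea, listening for energy at frequencies most people cannot hear, is this simple.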