I thought this was an interesting question to ask people because I think it’s often misunderstood. Google treats content found at different URLs as if it is different content, even though it might be the same, such as in the following examples:
When I was in high school, one of the required classes I had to take was a shop class. I had been taking mostly what the school called “enriched” courses, or what were mostly academic classes that featured primarily reading, writing, and arithmetic. A shop class had more of a trade focus. I was surprised when the first lesson on the first day of my shop class was a richer academic experience than any of the enriched classes I had taken.
– What other people are searching for, including trending searches. Trending searches are popular stories in your area that change throughout the day. Trending searches aren’t related to your search history.
– Relevant searches you’ve done in the past (if you’re signed in to your Google Account and have Web & App Activity turned on).
Note: Search predictions aren’t the answer to your search, and they’re not statements by other people or Google about your search terms.
Systems and methods consistent with the principles of the invention may provide a reasonable surfer model that indicates that when a surfer accesses a document with a set of links, the surfer will follow some of the links with higher probability than others. This reasonable surfer model reflects the fact that not all of the links associated with a document are equally likely to be followed. Examples of unlikely followed links may include “Terms of Service” links, banner advertisements, and links unrelated to the document.
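The core idea — that some links on a page are far more likely to be followed than others — can be sketched as a weighting function over link features. The feature names and weight values below are hypothetical illustrations, not figures from the patent:

```python
# A minimal sketch of the reasonable surfer idea: rather than treating
# every outbound link as equally likely to be followed, assign each link
# a weight based on its features, then normalize into probabilities.
# The features and multipliers here are invented for illustration.

def link_follow_probabilities(links):
    """links: list of dicts with hypothetical feature flags."""
    def weight(link):
        w = 1.0
        if link.get("is_terms_of_service"):
            w *= 0.05   # boilerplate links: rarely followed
        if link.get("is_banner_ad"):
            w *= 0.1    # banner ads: rarely followed
        if link.get("is_topically_related"):
            w *= 3.0    # related content: much more likely to be followed
        return w

    weights = [weight(link) for link in links]
    total = sum(weights)
    return [w / total for w in weights]

links = [
    {"url": "/related-article", "is_topically_related": True},
    {"url": "/terms", "is_terms_of_service": True},
    {"url": "/banner-ad", "is_banner_ad": True},
]
probs = link_follow_probabilities(links)
```

Under this sketch, the topically related link ends up with most of the probability mass, while the "Terms of Service" link and banner ad get very little — which is the behavior the patent's examples describe.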
Google’s original PageRank algorithm is based upon what its inventor referred to as the Random Surfer model, where it ranked pages on the Web based upon a probability that a person following links at random on the Web might end up upon a particular page:
The rank of a page can be interpreted as the probability that a surfer will be at the page after following a large number of forward links. The constant α in the formula is interpreted as the probability that the web surfer will jump randomly to any web page instead of following a forward link.
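That random surfer description maps directly onto a power-iteration computation: with probability α the surfer jumps to a random page, and otherwise follows one of the current page's forward links at random. Here is a toy version on a three-page graph; the graph and α value are illustrative, not anything Google has published:

```python
# Toy PageRank via power iteration, following the random surfer model:
# with probability alpha, jump to a random page; otherwise follow a
# random outbound link from the current page.

def pagerank(graph, alpha=0.15, iterations=50):
    """graph: dict mapping page -> list of pages it links to."""
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: alpha / n for p in pages}  # random-jump contribution
        for page, outlinks in graph.items():
            if outlinks:
                share = (1 - alpha) * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # dangling page: spread its rank across all pages
                for p in pages:
                    new[p] += (1 - alpha) * rank[page] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

Page "c" ends up with the highest rank here because it collects links from both "a" and "b", matching the intuition that rank is the probability of the surfer landing on a page after many steps.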
Do you search through Google on your phone? How do you know whether or not Google is watching you as you do, keeping an eye on whether you like the results you receive during your searches? Could satisfaction with search results be a ranking signal that Google uses now, or might use in the future?
A newly published Google patent application describes technology that would modify the scoring and ranking of query results using biometric indicators of a user's satisfaction or negative engagement with a search result. In other words, Google would track how satisfied or unsatisfied someone might be with search results and, using machine learning, build a model based upon that satisfaction, raising or lowering search results for a query. This kind of reaction might be captured using the camera on a searcher's phone to see their reaction to a search result, as depicted in the following screenshot from the patent:
This satisfaction would be based upon Google tracking and measuring biometric parameters of a user, obtained after a search result is presented to the user, to determine whether those parameters indicate negative engagement with that search result.
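One way to picture how such a signal might feed back into ranking is a simple score adjustment driven by a negative-engagement probability. Everything here — the function, the threshold, and the adjustment factors — is a hypothetical sketch for illustration; the patent application does not publish any such numbers:

```python
# Hypothetical sketch: adjust a result's ranking score using a
# negative-engagement probability (e.g., the output of a model over
# biometric parameters). Threshold and factors are invented values.

def adjust_score(base_score, negative_engagement):
    """negative_engagement: 0.0 (satisfied) .. 1.0 (clearly unsatisfied)."""
    if negative_engagement > 0.5:
        # demote results that appear to frustrate users
        return base_score * (1.0 - 0.4 * negative_engagement)
    # slightly boost results users appear satisfied with
    return base_score * (1.0 + 0.1 * (1.0 - negative_engagement))
```

The interesting part of the patent is the signal itself, not the arithmetic: the claim is that biometric reactions could stand in for more familiar engagement measures like clicks and dwell time.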
Advertising on the Web is going through some changes because of how smartphones and tablets track visitors on a site, and how advertisements may broadcast high-frequency sounds that act as audio watermarks that other devices can pick up on. Imagine watching TV: an advertisement broadcasts a high-frequency sound from your TV, your phone hears it and shares it with the advertiser, who may then track whether you search for or purchase the product offered on a website.
Googlebot Doesn’t Read Pictures of Text During Web Crawls
When I was an Administrator at Cre8asiteforums (2002-2007), one of my favorite forums on the site was one called the Website Hospital. People would come with their sites and questions about how they could improve them. One problem that often appeared was sites having trouble being found in search results for geographically related queries. A common symptom for sites experiencing that problem was that the only place the address of the business appeared on the site was in pictures of text, rather than actual text. This can be a problem when it comes to Google indexing that information. Google tells us they like text, and can have trouble indexing content found within images:
Google’s web crawler couldn’t read pictures of text, and Google wasn’t indexing that location information for their sites because of that. Site owners were often happy to find out that they just needed to include the address of their business in text, so that Google could crawl and index that information, making it more likely that they could be found for their location.
Under this new patent, Google adds a diversified set of trusted pages to act as seed sites. When calculating rankings for pages, Google would calculate a distance from the seed pages to the pages being ranked. The use of a trusted set of seed sites may sound a little like the TrustRank approach developed by Stanford and Yahoo a few years ago, as described in Combating Web Spam with TrustRank (pdf). I don’t know what role, if any, the Yahoo paper had on the development of the approach in this patent application, but there seem to be some similarities.
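The "distance from seed pages" idea can be sketched as a shortest-path computation over the link graph: every page gets the length of the shortest chain of links from any trusted seed, and pages closer to the seeds would be treated as more trustworthy. This BFS version is a simplification — the patent describes distances based on weighted links, which plain hop-counting ignores — and the graph and seed set are made up for illustration:

```python
# Hedged sketch of seed-distance ranking: breadth-first search from a
# trusted seed set over the link graph. Real implementations would use
# weighted link lengths; this counts hops only.

from collections import deque

def distance_from_seeds(graph, seeds):
    """graph: dict page -> list of linked pages; seeds: trusted pages."""
    dist = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in dist:
                dist[target] = dist[page] + 1
                queue.append(target)
    return dist  # pages unreachable from any seed are absent

graph = {
    "seed": ["a"],
    "a": ["b"],
    "b": [],
    "spam": ["spam2"],  # disconnected from the trusted seeds
}
d = distance_from_seeds(graph, ["seed"])
```

Note how the disconnected "spam" pages never receive a distance at all — one intuition behind seed-based approaches is that spam networks tend to sit far from, or entirely outside, the neighborhood of trusted sites.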