When I was in law school, I was a teaching assistant for an environmental law professor. One of the tasks he had me work on was reviewing and analyzing electronic databases that could be used to assess natural resource damages when environmental harm took place, such as the Exxon Valdez oil spill.
How do you determine the cost of the spill to the environment, to wildlife, to people who live in the area, and to people who rely upon the area for their jobs and welfare? In short, you look at things such as decisions from other courts where similar harms may have been litigated.
At the time, the World Wide Web was still a year away, and many of the electronic databases we were looking at were very helpful sources of information. My task was to review them and see how much value they might hold.
Google’s Pierre Far announced on his Google+ page that Google was releasing a new Panda update that supposedly included some new signals that could potentially help “identify low-quality content more precisely.”
The Google+ post also tells us that this change can help lead to a “greater diversity of high-quality small- and medium-sized sites ranking higher, which is nice.”
A new patent application shows off a quality scoring approach for content, based upon phrases. More on that patent filing below, but it might have something to do with this update.
It can be difficult to classify a query for a search engine based upon the query itself.
For example, you could classify the query “lincoln” based upon:
President Abraham Lincoln
The location, Lincoln, Nebraska
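One way to resolve that kind of ambiguity is to look at the other terms a searcher includes alongside the ambiguous one. Here is a minimal, hypothetical sketch of that idea — the sense labels and context-term lists are invented for illustration and are not taken from any Google patent:

```python
# Hypothetical context terms for each candidate sense of "lincoln".
# These sets hold *context* words only, not the ambiguous term itself.
SENSES = {
    "president": {"abraham", "president", "civil", "war", "gettysburg"},
    "city": {"nebraska", "city", "hotels", "weather", "ne"},
}

def classify(query_terms):
    """Score each candidate sense by how many query terms overlap its
    context set; with no overlapping context, the query stays ambiguous."""
    scores = {sense: len(set(query_terms) & terms)
              for sense, terms in SENSES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "ambiguous"

classify(["lincoln"])              # no extra context -> "ambiguous"
classify(["lincoln", "nebraska"])  # -> "city"
```

A bare one-word query gives the classifier nothing to work with, which is exactly why a search engine might look beyond the query itself for disambiguating signals.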
I’m going to have to turn up the sound on my TV, decide carefully what to watch, and test this. It would be very interesting if it works. Is Google clued in to what you are watching on TV? If so, is that through a set-top box, or an internet-enabled television?
If true, will this change the way that I do keyword research? Will it alter how I create content for the web, or decide upon page titles or meta descriptions? I’m not sure, but I am surprised.
The patent says that Google might monitor what’s on TV in your area and look for queries that might be related to that information. So, if someone searches for “Eagles” and a documentary about the band the Eagles is playing on TV in your area, that’s a signal that may influence the search results you receive.
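To make the idea concrete, here is a hedged sketch of that kind of reranking — not the patent’s actual mechanism. The schedule feed, area codes, and boost factor are all invented for illustration:

```python
# Hypothetical feed of programs currently airing, keyed by area.
AIRING = {"us-east": ["eagles documentary", "evening news"]}

def rerank(query, results, area):
    """results: list of (title, score) pairs. If the query matches a
    program airing locally, boost results whose titles share another
    word with that program (a sketch can over-boost if several
    programs match; a real system would be more careful)."""
    programs = AIRING.get(area, [])
    q = query.lower()
    reranked = []
    for title, score in results:
        for program in programs:
            if q in program.split():
                extra = set(program.split()) - {q}
                if extra & set(title.lower().split()):
                    score *= 1.5  # arbitrary boost factor
        reranked.append((title, score))
    return sorted(reranked, key=lambda r: r[1], reverse=True)

results = [("Eagles football schedule", 0.9),
           ("Eagles band documentary review", 0.8)]
rerank("eagles", results, "us-east")
```

With the documentary airing, the band-related result overtakes the football result; in an area without that program, the original order would stand.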
A couple of months ago, I wrote a post about a new patent from Google that was the first Google patent granted to Navneet Panda as an inventor. The patent described a complicated way for Google to judge the quality of websites, and my post was titled Is this Really the Panda Patent?. Simon Penson wrote a followup post at Moz titled The Panda Patent: Brand Mentions Are the Future of Link Building which looked at some other aspects of the patent.
On August 1st, Jayson Demers published a post to Forbes titled Implied Links, Brand Mentions And The Future Of SEO Link Building which covers a lot of the same ground as Simon’s post. I contacted an editor at Forbes and stated that the post plagiarized Simon’s post. Jayson didn’t give me any credit for my post about the patent either, but Simon did.
Years ago, I started referring to search results as recommendations, seeing how they’ve been starting to look more and more like that part of a page at Amazon that says “people who viewed this book also looked at these books.”
When someone searches at a search engine, one of the things they look for in the search results they receive are trustworthy pages (or recommendations) that look (and are) legitimate. How does a search engine deliver pages that are trustworthy?
One way to do that might be to boost pages in search results that the search engine believes are more trustworthy – and Google developed its own version of Trust Rank to do just that. The inventor of Google’s Trust Rank (which differs from the version that Yahoo invented) is Ramanathan Guha.
Most searchers, site owners, and search engine optimizers are familiar with Google’s link graph, and with how Google uses the connections between websites to help rank pages on the Web. In part, Google looks at the relevance of the content of a page compared to the query a searcher enters at the search engine.
In addition to “relevance”, Google also uses the patented method of PageRank, in which the quality and quantity of links pointed to a page are used as a proxy for the quality of the page being linked to. The higher the quality of a page (and the higher PageRank it possesses), the more PageRank it likely passes along.
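The “links as a proxy for quality” idea can be shown with a minimal, illustrative PageRank computation — this is a textbook sketch, not Google’s production implementation, and the toy graph is invented:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to.
    Each page splits its current rank evenly among its outlinks,
    so a high-rank page passes along more rank, as described above."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline; the rest flows via links.
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# Toy graph: a links to b and c, b links to c, c links back to a.
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

In this toy graph, c ends up with more rank than b because it is linked from two pages rather than one — the quantity half of the proxy; and a link from a high-rank page contributes a larger share than one from a low-rank page — the quality half.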
The link graph is one example of how Google measures, ranks, and possibly sorts web pages. Another that Google might look at is the attention graph – using topics and concepts that are searched upon frequently to change the rankings of pages based upon freshness and hot topics.
Does Google’s newly granted patent co-invented by Navneet Panda describe Google’s Panda Update?
Search Quality vs. Web Spam
Many of the patent filings I’ve written about from Google address Web spam issues, and how the search engine may take steps to keep its search results from being manipulated. An early example of Google tackling such issues is a patent filed in 2003 titled Methods and systems for identifying manipulated articles.
But many of the patents I’ve written about involve ways that Google is trying to improve the quality of search results that searchers see.