Somewhere in an alternative universe, one of the most feared hitters in baseball might instead have been known as one of its greatest pitchers. Babe Ruth started out as a pitcher for the Boston Red Sox in 1914, and when approached about getting his bat into the lineup on a daily basis in 1918, his manager Ed Barrow responded, “I’d be the laughingstock of baseball if I took the best lefthander in the league and put him in the outfield.” A couple of years later, Ruth was sold to New York’s team for an unprecedented $125,000, where he proceeded to hit 54 home runs for the Yankees and begin a pretty good career hitting a baseball instead of throwing it at people.
In 1920, anyone looking for information about the Babe probably wasn’t too interested in his pitching career. Likewise, when someone searches today for [world series champion], it’s likely that they are looking for fresh results. How does a search engine like Google determine when searchers might prefer fresh results, and when they might prefer older results?
Yesterday, I wrote about how Google may be looking at the semantics associated with HTML heading elements, and the content that they head, and how the search engine might be looking at such content with similar headings across the Web to determine how much weight to give words and phrases within those headings.
That post was originally part of the introduction to this post, but it developed a life of its own, and I ran with it. Here, we’re going to look at semantics related to other HTML structures, primarily lists and tables.
I’m going to bundle a handful of patents together for this choice of one of the 10 most important SEO patents, since I think they work together to illustrate how a search engine might use semantic structures to learn about how words and concepts might be related to each other on the Web. Some of these patents are older, and one of them is a pending patent application published this week. I’m also going to include a number of white papers which help define a process that might seem to be very much behind the scenes at Google. I’m going to focus upon Google with this post, though I expect that similar things may be happening at other search engines as well.
How important are heading elements to the rankings of webpages by search engines?
I’ve seen arguments by people who write about and study search engines and SEO very closely, which often appear written up in “SEO Expert Ranking Lists,” that HTML heading elements (<h1>, <h2>, etc.) are very important, arguments that heading elements were once important but no longer are, and arguments that heading elements were never important. Sadly, all of those arguments are likely wrong; not so much about the importance (or lack thereof), but rather about the reasons for that importance.
It’s possible that a search engine might notice when a word or term or phrase appears near the top of a page, or above a wall of text. It’s also possible that a search engine pays attention when those are shown in larger font sizes, or bolder than the rest of the page text, or in a different font than the remainder of the words on the page. But that prominence and that display isn’t really what a heading element is about. HTML and CSS offer font size attributes and properties; there’s also a bold property. Any words near the top of a page might be said to be more prominent than others.
You can use many HTML element attributes and values and/or cascading style sheet properties to make words within different HTML elements bolder and larger, and to transform them to all capitals or a different font or color, or all of those if you want. You can purposefully place certain text at the top of a page to make it appear that the rest of the page is described by those words.
In the last installment of this series, we looked at how Google may be using phrase based indexing to use the fact that many phrases often tend to co-occur with other phrases within the content of web pages, to re-rank those pages. When we look at phrases, we also need to drill down to a special set of phrases describing named entities, or specific people, places, or things. In addition to trying to understand which phrases might tend to co-occur with those named entities, the search engines may look to other sources such as Wikipedia, Freebase from Metaweb, the Internet Movie Database (IMDB), and different map databases to attempt to understand when a phrase indicates an actual (or fictional) entity.
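As a toy illustration of the co-occurrence idea, not Google’s actual implementation, counting how often pairs of extracted phrases appear together across documents might look something like this (the function name and data shapes here are my own invention):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    """Count how often each pair of phrases appears in the same document.
    `documents` is a list of sets of phrases already extracted per page."""
    counts = Counter()
    for phrases in documents:
        # sorted() gives each pair a canonical order, so ("a", "b")
        # and ("b", "a") are counted as the same pair
        for pair in combinations(sorted(phrases), 2):
            counts[pair] += 1
    return counts

# Hypothetical pages, reduced to the phrases found on each
pages = [
    {"babe ruth", "world series", "red sox"},
    {"babe ruth", "world series", "yankees"},
    {"world series", "yankees"},
]
counts = cooccurrence_counts(pages)
```

Phrases that co-occur far more often than chance would predict are the interesting ones; a real system would normalize these raw counts against how common each phrase is on its own.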
Google, Bing, and Yahoo all look for named entities on web pages and in search queries, and will use their recognition of named entities to do things like answer questions such as “where was Barack Obama born?”
Back in 2007, I wrote about a Yahoo patent describing how Yahoo! might crawl a webpage, and then recrawl the same page around a minute later to see if any of the links on the page had changed. It might do that to try to identify what it called “Transient Links,” or links pointing to things like advertisements that might change on every visit to a page, which aren’t links that the search engine would want to crawl and index. The post is A Yahoo Approach to Avoid Crawling Advertisement and Session Tracking Links.
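The core of that approach can be sketched very simply: extract the link sets from two crawls of the same URL taken a minute apart, and flag the links that appear in only one of them. This is my own minimal sketch, not language from the patent:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.add(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def transient_links(first_crawl, second_crawl):
    """Links present in only one of two crawls taken a minute apart
    are candidates for 'transient' links (rotating ads, session IDs)."""
    a, b = extract_links(first_crawl), extract_links(second_crawl)
    return a ^ b  # symmetric difference: changed between visits

# Hypothetical example: the ad link rotates between visits, the
# article link stays put.
crawl_1 = '<a href="/article">Story</a><a href="/ad?id=111">Ad</a>'
crawl_2 = '<a href="/article">Story</a><a href="/ad?id=222">Ad</a>'
```

The stable link survives; both versions of the rotating ad link get flagged as transient and could be skipped by the crawler.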
Google was granted a patent this week on a similar topic that looks at “transient” content on web pages. While this kind of content might include advertisements that change regularly on return visits to a page, it could also include things like current weather forecasts (Warrenton, Virginia, 40 degrees and cloudy), for example. That kind of content changes on a regular basis, but often has little to do with content found elsewhere on the page.
Google would want to be able to identify transient content so that it wouldn’t index pages based upon it, and it wouldn’t show advertisements that focus upon it either.
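The same two-crawl comparison can be applied to blocks of content rather than links. Here is a rough sketch of the idea, assuming block boundaries fall at div tags; a real system would segment the rendered page far more carefully than this:

```python
import re

def segment_blocks(html):
    """Naive block segmentation: split raw HTML at div boundaries.
    This stands in for the much more careful page segmentation a
    search engine would actually perform."""
    return [b.strip() for b in re.split(r"</?div[^>]*>", html) if b.strip()]

def transient_blocks(first_crawl, second_crawl):
    """Blocks that appear in only one of two crawls of the same URL
    are candidates for transient content, which a search engine could
    then exclude from indexing and from ad targeting."""
    a = set(segment_blocks(first_crawl))
    b = set(segment_blocks(second_crawl))
    return a ^ b  # symmetric difference: present in one crawl, not both

# Hypothetical page: the weather widget changes, the article does not.
monday = ("<div>Warrenton, Virginia: 40 degrees and cloudy</div>"
          "<div>Our town's history goes back to...</div>")
tuesday = ("<div>Warrenton, Virginia: 52 degrees and sunny</div>"
           "<div>Our town's history goes back to...</div>")
```

Only the weather block gets flagged, so the page would still be indexed for its stable content about the town’s history.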
The builder of the largest search engine in the world during the first decade of the 21st century joined Google shortly after building that search engine, and possibly licensed the technology behind it to Google. She worked for Google for a number of years, creating a way of indexing pages based upon the meaningful phrases that appear upon those pages, looking at how phrases co-occur on pages to cluster and rerank those pages, using the phrases to identify spam pages and pages with duplicate content, and creating taxonomies and snippets for pages using phrases. This phrase-based indexing system provided a way to defeat Googlebombing, and to determine how much anchor text relevance should be passed along with links.
Then Anna Patterson left Google to start the search engine Cuil, which was supposed to be a Google killer. Except it wasn’t. Now she’s back at Google, and looks to be working on phrases again.
PageRank is a measure that stands for the probability that someone who starts out at any page on the Web, randomly clicks on links they find on pages, and every so often gets bored and teleports (yes, that is official technical search engineer jargon) to a random page will eventually end up at a specific page.
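That probability can be computed with simple power iteration, using a damping factor of 0.85, the value reported in the original PageRank paper. The function and the toy link graph below are my own minimal sketch, not Google’s implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Probability the 'random surfer' ends up on each page:
    with probability `damping` follow a random outlink from the
    current page, otherwise teleport to a uniformly random page.
    `links` maps each page to a list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Everyone gets the teleportation share up front
        new = {p: (1.0 - damping) / n for p in pages}
        for p in pages:
            out = links[p]
            if out:
                share = damping * rank[p] / len(out)
                for q in out:
                    new[q] += share
            else:
                # Dangling page with no outlinks: the surfer
                # teleports from here as well
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Toy web of three pages: "b" is linked to by both "a" and "c"
toy = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(toy)
```

On this toy graph, page "b" comes out on top because two pages link to it; the ranks always sum to 1, since PageRank is a probability distribution over pages.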
Larry Page referred to this person clicking on links as a “random surfer.” Thing is, most people aren’t so random. It’s not like we’re standing at some street corner somewhere, and just randomly set off in some direction. (OK, I confess that I do sometimes do just that, especially when faced with a sign like that below.)
Imagine someone from Google waking up in the middle of the night, with the thought, “Hmmmm. Maybe we’re not doing PageRank quite right. Maybe we should be doing things like paying attention to where links appear on a page, and other things as well.”
In earlier days of SEO, many search engine optimization consultants stressed placing important and valuable content toward the top of the HTML code for pages, based upon the idea that search engines would weigh prominent content more heavily if it appeared early in documents. There are still very well known SEO consultants who include information about a “table trick” on their sites, describing how to use tables to move the main body content for a page above sidebar navigation within the HTML. I’ve also seen a similar trick using CSS absolute positioning, where less important content appears higher on the page that visitors actually see, but lower in the HTML code for the page.
Back in 2003, the folks at Microsoft Research Asia published a paper titled VIPS: a Vision-based Page Segmentation Algorithm. The abstract for the paper describes the approach, telling us that:
A new web content structure analysis based on visual representation is proposed in this paper. Many web applications such as information retrieval, information extraction and automatic page adaptation can benefit from this structure. This paper presents an automatic top-down, tag-tree independent approach to detect web content structure. It simulates how a user understands web layout structure based on his visual perception. Comparing to other existing techniques, our approach is independent to underlying documentation representation such as HTML and works well even when the HTML structure is far different from layout structure.