Category Archives: Search Engine Optimization (SEO)

Search Engine Optimization tips, strategies, and information from SEO by the Sea, to help make websites easier to find.

10 Most Important SEO Patents, Part 5 – Phrase Based Indexing

The builder of the largest search engine in the world during the first decade of the 21st century joined Google shortly after building that search engine, and may have licensed the technology behind it to Google. She worked at Google for a number of years, creating a way of indexing pages based upon the meaningful phrases that appear on those pages, looking at how phrases co-occur on pages to cluster and rerank them, using those phrases to identify spam pages and pages with duplicate content, and creating taxonomies and snippets for pages from phrases. This phrase-based indexing system provided a way to defeat Google bombing, and to determine how much anchor text relevance should be passed along with links.

A screenshot from Phrase Based Indexing in an Information Retrieval System showing how phrases are identified as good phrases and bad phrases.
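As a rough illustration only, and not the patent's actual method, here is a minimal Python sketch of the co-occurrence idea: pull naive two-word candidate phrases from each document and count how often pairs of them appear together. The sample documents, the two-word window, and the function names are all my own assumptions.

```python
from collections import Counter
from itertools import combinations

def candidate_phrases(text, n=2):
    """Very naive candidate phrases: consecutive n-word windows."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def cooccurrence_counts(documents):
    """Count how often pairs of candidate phrases appear in the same document."""
    counts = Counter()
    for doc in documents:
        for a, b in combinations(sorted(candidate_phrases(doc)), 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical sample documents, just to show the shape of the output.
docs = [
    "phrase based indexing helps rerank pages",
    "phrase based indexing can identify spam pages",
]
for pair, count in cooccurrence_counts(docs).most_common(3):
    print(pair, count)
```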

Then Anna Patterson left Google to start the search engine Cuil, which was supposed to be a Google killer. Except it wasn’t. Now she’s back at Google, and looks to be working on phrases again.

Continue reading

10 Most Important SEO Patents: Part 4 – PageRank Meets the Reasonable Surfer

PageRank is a measure of the probability that someone who starts out on any page on the Web, randomly clicks on the links they find on pages, and every so often gets bored and teleports (yes, that is official technical search engineer jargon) to a random page, will eventually end up at a specific page.
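As a rough sketch of that random surfer model, and not something taken from the patent itself, here is a tiny power-iteration version in Python. The three-page link graph and the commonly cited 0.85 damping factor are assumptions made purely for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank: follow a link with probability `damping`,
    teleport to a random page with probability 1 - damping."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # a dangling page spreads its rank everywhere
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# A hypothetical three-page web, purely for illustration.
print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```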

Larry Page referred to this person clicking on links as a “random surfer.” Thing is, most people aren’t so random. It’s not like we’re standing at some street corner somewhere and just setting off at random in some direction. (OK, I confess that I do sometimes do just that, especially when faced with a sign like the one below.)

A street corner in The Plains, Virginia, with a sign showing distances to many other cities near and far.

Imagine someone from Google waking up in the middle of the night with the thought, “Hmmmm. Maybe we’re not doing PageRank quite right. Maybe we should be doing things like paying attention to where links appear on a page, and other things as well.”
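The reasonable surfer idea is that not every link on a page is equally likely to be clicked, so the weight a link passes can depend on features such as where it appears. As a hedged illustration only, and not the patent's actual model, here is a sketch that turns invented per-link position weights into click probabilities; those probabilities could then replace the uniform split over outlinks used in the simple PageRank sketch above.

```python
def link_click_probabilities(outlinks):
    """Turn per-link feature weights into click probabilities, so that a link
    in the main content passes more weight than one buried in the footer.
    The positions and weights here are invented for illustration only."""
    feature_weights = {"main_content": 1.0, "sidebar": 0.4, "footer": 0.1}
    scores = {url: feature_weights.get(position, 0.2)
              for url, position in outlinks.items()}
    total = sum(scores.values())
    return {url: score / total for url, score in scores.items()}

# Hypothetical outlinks on a single page, keyed by where they appear.
print(link_click_probabilities({
    "/article": "main_content",
    "/archive": "sidebar",
    "/terms": "footer",
}))
```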

Continue reading

10 Most Important SEO Patents: Part 3 – Classifying Web Blocks with Linguistic Features

In the earlier days of SEO, many search engine optimization consultants stressed placing important and valuable content towards the top of the HTML code on pages, based upon the idea that search engines would weigh prominent content more heavily if it appeared early in documents. There are still very well known SEO consultants whose sites include information about a “table trick,” describing how to use tables to move the main body content for a page above the sidebar navigation within the HTML. I’ve also seen a similar trick done with CSS absolute positioning, where less important content appears higher on the page that visitors actually see, but lower in the HTML code for that page.

Back in 2003, the folks at Microsoft Research Asia published a paper titled VIPS: a Vision-based Page Segmentation Algorithm. The abstract for the paper describes the approach, telling us that:

A new web content structure analysis based on visual representation is proposed in this paper. Many web applications such as information retrieval, information extraction and automatic page adaptation can benefit from this structure. This paper presents an automatic top-down, tag-tree independent approach to detect web content structure. It simulates how a user understands web layout structure based on his visual perception. Comparing to other existing techniques, our approach is independent to underlying documentation representation such as HTML and works well even when the HTML structure is far different from layout structure.
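VIPS itself works from rendered visual cues, and the patent discussed in this post classifies page blocks using linguistic features, neither of which fits into a few lines of code. Purely to give a flavor of block classification, here is a toy Python sketch that labels a block of a page by its link density and text length; the thresholds, labels, and examples are invented assumptions, not anything drawn from the paper or the patent.

```python
def classify_block(text, link_texts):
    """Toy block classifier: blocks dominated by link text look like navigation
    or boilerplate, while longer, link-light blocks look like main content.
    The 0.5 link-density and 200-character thresholds are arbitrary guesses."""
    link_chars = sum(len(t) for t in link_texts)
    link_density = link_chars / max(len(text), 1)
    if link_density > 0.5:
        return "navigation/boilerplate"
    if len(text) > 200:
        return "main content"
    return "supporting text"

print(classify_block("Home About Contact Archives",
                     ["Home", "About", "Contact", "Archives"]))
print(classify_block("A long article body " * 20, ["a single citation link"]))
```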

Continue reading

10 Most Important SEO Patents: Part 2 – The Original Historical Data Patent Filing and its Children

Imagine gathering together 10 extremely knowledgeable search engineers, locking them into a room for a couple of days with walls filled with whiteboards, and having them brainstorm ways to keep stale content and web spam from ranking highly in search results. Add to their challenge that the methods they come up with should focus upon “the nature and extent of changes over time” to web sites. Once they’ve finished, imagine taking what appears on those whiteboards and condensing it into a patent.

The end result would likely look like Google’s patent Information Retrieval based on Historical Data. When this patent was originally published as a pending patent application awaiting prosecution and approval back on March 31, 2005, it caused quite a stir in the SEO community. Here are a few of the many reactions in forums and blog posts as a result:

Continue reading

10 Most Important SEO Patents: Part 1 – The Original PageRank Patent Application

I like looking at patents, whitepapers, and other primary sources from search engines to help me in my practice of SEO. I’ve been writing about them for more than 5 years now, and am putting together this series on the 10 Most Important SEO Patents to share some of what I’ve learned during that time. These aren’t patents about SEO, but rather ones that I would recommend to anyone interested in learning more about SEO by looking at patents from sources like Google, Microsoft, or Yahoo.

The first PageRank patent application was never published by the United States Patent and Trademark Office (USPTO), was never assigned to a particular company or organization, and was never granted. It avoids the dense legal language and mathematics that can make reading patents difficult, and it captures the excitement of a Ph.D. candidate, Larry Page, who had just come up with a breakthrough in indexing web pages that had the potential to be a vast improvement over the other search engines of the time it was filed.

The top of the cover letter for the provisional patent filing for PageRank.

Continue reading

Expanded Snippets, Google’s Instant Previews, and the Costs and Benefits of Making Changes

The decision process you go through when deciding to make changes to your site can be tough. Even if those changes are necessary, determining the best way to implement them can make you pause and spend a lot of time considering all the potential alternatives you might have. You can do a cost/benefit analysis, weighing how much of your site a change would affect against the benefits of making that change, and against the costs of both making it and deciding not to.

It shouldn’t require much thought to do things like make your website more usable, but it can, especially if the changes alter the look and feel of your pages and the way that people interact with them. A good example is the set of changes taking place at Google, where the search engine has implemented a number of new design elements over the past year or so: new colors and formatting for its search results pages, a different look for local search results presented within Web search results, URLs now appearing under page titles and above snippets, and Instant Previews, which show a thumbnail of a page along with call-out boxes of text showing where query terms appear within that thumbnail.

On the subject of those Instant Previews, one of the challenges that search engines face is presenting the web pages returned for a search in a way that helps searchers locate the information they want to find. A typical search result for a web page includes a page title, a URL for the page, and a short snippet that might be taken from a meta description or from text found on the page itself. A searcher is shown a page filled with these document representations to choose from, but sometimes that’s not enough to decide which page to click through to.
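Google doesn't publish how it chooses snippets, so the following is only a rough sketch of the general idea: prefer a meta description when one exists, and otherwise pull a window of page text around the first query term found. The 80-character window, the function name, and the example text are my own assumptions.

```python
def build_snippet(page_text, query, meta_description=None, window=80):
    """Prefer the meta description; otherwise return a window of page text
    around the first query term found. The window size is arbitrary."""
    if meta_description:
        return meta_description
    lowered = page_text.lower()
    for term in query.lower().split():
        position = lowered.find(term)
        if position != -1:
            start = max(0, position - window // 2)
            return "…" + page_text[start:start + window].strip() + "…"
    return page_text[:window].strip() + "…"

print(build_snippet("Instant Previews show a thumbnail of a page and call out "
                    "boxes of text showing where query terms appear.",
                    "instant previews"))
```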

Continue reading

Google Patent on Displaying Breadcrumb Links in Search Results

Once upon a time, when you searched the Web at Google, the results displayed were limited to a list of 10 pages, each with a page title, a snippet of text from the meta description or page content, and a URL for that page. We’ve been seeing the search engines diversify what they might display for certain pages, with special formats for things like forum posts, Q&A listings, and pages that include events, and sometimes sitelinks or quicklinks to other pages as well.

The URLs shown for some pages might have hinted at the structure of a site and the locations of pages within its hierarchy, if they showed the directories and subdirectories within the paths to those pages. Some websites include breadcrumb navigation on their pages to show you more explicitly where you are within a site, and to provide an easy way to visit higher-level categories. Google has started showing those breadcrumb listings for some pages, to make those listings more useful for searchers and to make it clearer where those pages sit within the hierarchy of a site.

An example from the patent of the search engine showing breadcrumb links in a search result instead of a URL for the page listed.
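A real system could read the breadcrumb navigation found on pages themselves, so treat this as a simplified sketch only: a Python function that derives a breadcrumb trail from a URL's directory path, using a made-up example URL.

```python
from urllib.parse import urlparse

def breadcrumb_trail(url):
    """Build (label, link) breadcrumb pairs from a URL's directory path.
    Labels are just prettified path segments; a real system could instead
    use the breadcrumb navigation displayed on the page itself."""
    parsed = urlparse(url)
    trail = [(parsed.netloc, f"{parsed.scheme}://{parsed.netloc}/")]
    path = ""
    for segment in filter(None, parsed.path.split("/")):
        path += f"/{segment}"
        label = segment.replace("-", " ").title()
        trail.append((label, f"{parsed.scheme}://{parsed.netloc}{path}/"))
    return trail

# A hypothetical URL, purely for illustration.
for label, link in breadcrumb_trail("http://www.example.com/outdoor-gear/tents/"):
    print(label, "->", link)
```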

Continue reading

Google Granted Patent on Hostname Mirrors

For years, the New York Times website was a great example I could point people to of a very high-profile site getting one of the basics of SEO very, very wrong.

If you visited the site at “http://newyorktimes.com/” you would see a toolbar PageRank of 7 for its homepage. If instead you visited the site at “http://www.newyorktimes.com/” you would see a toolbar PageRank of 9. The New York Times pages resolved at both sets of URLs, with and without the www hostname. Because all indexed pages of the site were accessible both with and without the “www,” those pages weren’t getting all the PageRank they should have been; PageRank was split between the two versions of the site, and that probably cost them in rankings at Google, and in traffic from the Web. Google likely also wasted its own bandwidth, and the Times’ bandwidth, returning to crawl both versions of the site instead of just one.

A few years ago, someone with at least a basic knowledge of SEO came along and fixed the New York Times site so that if you followed a link to a page on the site without the “www,” you would be sent to the “www” version via a status code 301 redirect. The change ruined an example that I loved showing people, one that demonstrated that even very well known websites make mistakes and ignore the basics. That’s one of the things that makes the Web a place where small businesses can compete against much larger companies with much higher budgets.
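In practice a fix like that usually lives in the web server configuration rather than in application code, but as a minimal sketch of the behavior, assuming a hypothetical canonical hostname of www.example.com, here is a small Python handler that answers requests arriving under any other hostname with a permanent 301 redirect to the canonical one.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL_HOST = "www.example.com"  # hypothetical canonical hostname

class CanonicalHostRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        # Drop any port number from the Host header before comparing.
        host = self.headers.get("Host", "").split(":")[0]
        if host != CANONICAL_HOST:
            # Permanent redirect, so search engines consolidate PageRank and
            # other signals onto a single version of each URL.
            self.send_response(301)
            self.send_header("Location", f"http://{CANONICAL_HOST}{self.path}")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"canonical host\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), CanonicalHostRedirect).serve_forever()
```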

Continue reading