Category Archives: Search Engine Optimization (SEO)

Search Engine Optimization tips and strategies and information, from SEO by the Sea, to help make web sites easier to find.

Google Plus Box Patent Application

When you perform searches in Google, sometimes you will see within the listed search results one which has a plus sign next to it. If you click upon the plus sign, you are shown some more information. A new patent application published at the USPTO website provides some information about how expanded and collapsed data in search results might work.

It also raises some questions about how much information search engine results should actually show.

These types of results may have first appeared displaying maps and local business information for specific businesses. Google referred to this feature in their Webmaster Help pages as a plus box (though the link is no longer live):

The address link shown below some sites in our search results (in an expandable area called a Plus Box) is meant to help searchers locate businesses and compare search results. We show the address link for results that are local in nature and for which we have an associated address. If we don’t have an address for your business, or we don’t think that an address is relevant to your site we won’t show it.

Continue reading Google Plus Box Patent Application

Followup on Google’s Historical Data Patent Application

One of the most talked about patent applications from Google over the past couple of years was one which looked at how time might be incorporated into a system of ranking documents, and how time might help the search engine recognize when people might be attempting to manipulate (spam) search results.

The patent application was published in March of 2005 – Information retrieval based on historical data

Over the past couple of months, Google has had some new patent applications published which share a good amount of the description of that original patent filing, but contain new and modified claims.

Calculating Search Rankings with User Web Traffic Data

Can Web traffic information help to improve the relevancy of search results?

Should a search engine learn about how to rank a page by watching searchers use other search engines?

Can information gathered from Internet Service Providers (ISPs) and Web proxies be used to construct a near real-time map of the Web, folding information about that traffic into a ranking system for those pages?

A new patent application from the Pisa-based research team at Ask.com explores these topics and a few more, and suggests ways to improve the freshness, coverage, ranking and clustering of search results through looking at user Web traffic data.
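One way to picture folding traffic data into rankings is a simple blended score. This is a minimal sketch of the general idea, not the patent's actual method: the weighting scheme, the log-scaling, and the example figures are all my assumptions.

```python
# Hypothetical sketch: blending a content-relevance score with user-traffic
# popularity. The alpha weight and log-scaling are assumptions for
# illustration, not taken from the Ask.com patent application.
import math

def blended_score(content_score: float, visit_count: int, alpha: float = 0.7) -> float:
    """Combine content relevance with log-scaled traffic popularity."""
    popularity = math.log1p(visit_count)  # damp heavy-tailed visit counts
    return alpha * content_score + (1 - alpha) * popularity

# Invented example pages: (content score, visits observed at ISPs/proxies)
pages = {
    "example.com/a": (0.9, 10),      # strong content match, little traffic
    "example.com/b": (0.6, 50_000),  # weaker match, heavily visited
}
ranked = sorted(pages, key=lambda p: blended_score(*pages[p]), reverse=True)
```

In this toy example the heavily visited page outranks the stronger textual match, which is exactly the kind of re-ordering that traffic signals could introduce.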

Why Look at Web Traffic Information?

There are three basic tasks that a search engine will normally perform. It will:

Continue reading Calculating Search Rankings with User Web Traffic Data

Recent Readings on SEO, Search, and Rankings

Search Engine Optimization with PHP
Web Dragons: Inside the Myths of Search Engine Technology
Google’s Pagerank and Beyond: The Science of Search Engine Rankings

There are a number of very good books and ebooks on search engine optimization and search, and I’m always on the lookout for more.

A couple of months back, Jaimie Sirovich mentioned on his blog, SEO Egghead, that his publisher was providing some review copies of his new book for people who might write about them, and Jaimie asked anyone who might be interested in reviewing a copy to contact him. I sent him a note, letting him know that I would be interested, and I’m glad that I did.

Mike Grehan also mentioned a couple of books in his Clickz articles over the past few months that sounded pretty interesting, and I ordered them based upon his referrals. Once again, I’m happy that I followed his suggestions for search related reading.

Search Engine Optimization with PHP
by Jaimie Sirovich and Cristian Darie

Continue reading Recent Readings on SEO, Search, and Rankings

Search Result Snippets and the Perception of Search Quality

We often talk about relevance when it comes to writing webpages, but another aspect of content for a page that needs to be considered is how engaging and persuasive what we write might be. What makes one result appear to be more relevant, and more trustworthy, than another?

A paper from researchers at A9 and Yahoo, Summary Attributes and Perceived Search Quality, shows some experimentation on how people might perceive search results based upon what a search engine displays from Web pages on its search results pages.

A list of search results will usually contain the title, URL, and an abstract or snippet from pages. If the words within the meta description contain terms from the search query, part or all of the meta description may be shown to searchers. If the page is returned as relevant for a term, and the words used aren’t in the meta description, other words from the page may be shown instead.
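The selection logic described above can be sketched in a few lines. This is a simplified illustration of the behavior, not any search engine's actual implementation; the window size and fallback rules are my assumptions.

```python
# Sketch of snippet selection: prefer the meta description when it contains
# a query term, otherwise fall back to a window of body text around the
# first matching term. Window size and tie-breaking are assumptions.
def choose_snippet(query: str, meta_description: str, body: str, window: int = 80) -> str:
    terms = [t.lower() for t in query.split()]
    if any(t in meta_description.lower() for t in terms):
        return meta_description
    body_lower = body.lower()
    for t in terms:
        pos = body_lower.find(t)
        if pos != -1:
            start = max(0, pos - window // 2)
            return body[start:start + window]
    return body[:window]  # no term found: fall back to the opening of the page
```

Run against a page whose meta description omits the query terms, this returns a passage of body text instead, which matches the behavior searchers see in results pages.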

What is it that searchers are looking for that might cause them to choose one search result over another to click through and investigate further?

Continue reading Search Result Snippets and the Perception of Search Quality

Study Concludes Robots.txt Files Should be Replaced

One of my first stopping points when assessing whether there are any technical issues involving a Website is a text file in the root directory of a site with the name robots.txt.

Some sites don’t have a robots.txt file, and some don’t necessarily need one, but a dynamic site with endless loops that a search engine spider may get lost within should have a robots.txt file, with a disallow statement keeping spidering programs from trying to index those pages.

A site that republishes the same content under different URLs, such as alternative print versions of pages, should also consider disallowing those pages.
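A file covering both cases above might look like the following. The paths are invented examples, and the check uses Python's standard-library parser rather than any search engine's actual spider:

```python
# Hypothetical robots.txt with disallow statements for dynamic result pages
# and duplicate print-friendly versions; the paths are invented examples.
robots_txt = """\
User-agent: *
Disallow: /search        # dynamic pages a spider could loop through endlessly
Disallow: /print/        # duplicate print versions of regular pages
"""

# Python's standard-library parser can confirm the rules behave as intended.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())
print(rp.can_fetch("*", "/print/about.html"))  # blocked duplicate
print(rp.can_fetch("*", "/about.html"))        # normal page, allowed
```

Testing a robots.txt this way before deploying it is a cheap guard against the kind of error the study below is concerned with, since a single bad rule can block an entire site.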

An error in a robots.txt file can have some serious implications for the indexing of a web site.

Continue reading Study Concludes Robots.txt Files Should be Replaced

Stanford’s New PageRank Patent

Calculating PageRank takes a lot of time and computing power. If there were a way to speed up the process, it might make a significant difference to the amount of time that it takes to assign PageRank values to pages.

The original patents involving PageRank, Method for node ranking in a linked database (updated) and Method for scoring documents in a linked database, have been joined by a newly granted patent that explores a faster method of calculating ranks for pages.

On the Web, most links between pages are between pages in the same domain. That’s an observation that the process in this new patent uses to its advantage.

Methods for ranking nodes in large directed graphs
Invented by Sepandar D. Kamvar, Taher H. Haveliwala, Glen Jeh, and Gene Golub
Assigned to Board of Trustees of the Leland Stanford Junior University
US Patent 7,216,123
Granted May 8, 2007
Filed: August 22, 2003
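For context, this is the baseline computation the patent aims to accelerate: plain power-iteration PageRank over a link graph. It is the standard textbook algorithm, not the patent's block-based method, and the tiny graph is an invented example chosen to show the mostly intra-domain link structure the inventors observed.

```python
# Baseline power-iteration PageRank (the standard algorithm, not the
# patent's accelerated method). Damping factor and iteration count are
# conventional choices.
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    pages = list(links)
    n = len(pages)
    ranks = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * ranks[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * ranks[p] / n
        ranks = new
    return ranks

# Invented graph where most links stay inside one domain, the structure
# the patent's faster method exploits.
graph = {
    "a.com/1": ["a.com/2"],
    "a.com/2": ["a.com/1", "b.com/1"],
    "b.com/1": ["a.com/1"],
}
ranks = pagerank(graph)
```

Each iteration touches every link in the graph, which is why full-Web PageRank is so expensive and why grouping pages by domain into blocks can cut the work down.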

Continue reading Stanford’s New PageRank Patent

Google Reviews: Reputation + Quality + Snippets + Clustering

You’ve likely seen reviews of businesses in Google Maps, and Seller Ratings in Froogle for merchants. For a few products listed in Google Reviews, such as MP3 players, you will also see product reviews.

I’ve written previously about the Growing power of online reviews, and wrote a detailed breakdown of a Google patent application on how they may find and aggregate online reviews.

That patent application left a lot of questions unanswered about topics such as how reviews are ranked and valued. Google has had five new patent applications published which look like they might answer some of those questions, with some interesting insights.