Category Archives: Search Engine Optimization (SEO)

Search engine optimization tips, strategies, and information from SEO by the Sea, to help make web sites easier to find.

Calculating Search Rankings with User Web Traffic Data

Can Web traffic information help to improve the relevancy of search results?

Should a search engine learn about how to rank a page by watching searchers use other search engines?

Can information gathered from Internet Service Providers (ISPs) and Web proxies be used to construct a near real-time map of the Web that folds information about that traffic into a ranking system for those pages?

A new patent application from the Pisa-based research team at Ask.com explores these topics and a few more, and suggests ways to improve the freshness, coverage, ranking and clustering of search results through looking at user Web traffic data.

Why Look at Web Traffic Information?

There are three basic tasks that a search engine will normally perform. It will:

Continue reading Calculating Search Rankings with User Web Traffic Data

Recent Readings on SEO, Search, and Rankings

Search Engine Optimization with PHP
Web Dragons: Inside the Myths of Search Engine Technology
Google’s Pagerank and Beyond: The Science of Search Engine Rankings

There are a number of very good books and ebooks on search engine optimization and search, and I’m always on the lookout for more.

A couple of months back, Jaimie Sirovich mentioned on his blog, SEO Egghead, that his publisher was providing some review copies of his new book for people who might write about them, and Jaimie asked anyone who might be interested in reviewing a copy to contact him. I sent him a note, letting him know that I would be interested, and I’m glad that I did.

Mike Grehan also mentioned a couple of books in his ClickZ articles over the past few months that sounded pretty interesting, and I ordered them based upon his recommendations. Once again, I’m happy that I followed his suggestions for search-related reading.

Search Engine Optimization with PHP
by Jaimie Sirovich and Cristian Darie

Continue reading Recent Readings on SEO, Search, and Rankings

Search Result Snippets and the Perception of Search Quality

We often talk about relevance when it comes to writing webpages, but another aspect of a page’s content that needs to be considered is how engaging and persuasive what we write might be. What makes one result appear to be more relevant, and more trustworthy, than another?

A paper from researchers at A9 and Yahoo, Summary Attributes and Perceived Search Quality, shows some experimentation on how people might perceive search results based upon what a search engine displays from Web pages on its search results pages.

A list of search results will usually contain the title, URL, and an abstract or snippet from pages. If the words within the meta description contain terms from the search query, part or all of the meta description may be shown to searchers. If the page is returned as relevant for a term, and the words used aren’t in the meta description, other words from the page may be shown instead.
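
As a rough illustration of that behavior, here is a small sketch in Python, with made-up query and page text, of one naive way to choose a snippet: show the meta description when it contains a query term, and otherwise fall back to a window of page text around the first matching term. This is only an illustration of the general idea, not how any particular search engine actually builds its snippets.

def choose_snippet(query, meta_description, body_text, window=80):
    terms = [t.lower() for t in query.split()]
    # Show the meta description when it contains any of the query terms.
    if meta_description and any(t in meta_description.lower() for t in terms):
        return meta_description
    # Otherwise pull a window of page text around the first matching term.
    lowered = body_text.lower()
    for term in terms:
        pos = lowered.find(term)
        if pos != -1:
            start = max(0, pos - window)
            end = min(len(body_text), pos + len(term) + window)
            return "..." + body_text[start:end].strip() + "..."
    return body_text[:2 * window]  # no term matched: fall back to the page opening

print(choose_snippet(
    "gyroball pitch",
    "Baseball news and notes from spring training.",
    "Daisuke Matsuzaka is rumored to throw a gyroball, a pitch few batters have ever seen.",
))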

What is it that searchers are looking for that might cause them to choose one search result over another to click through and investigate further?

Continue reading Search Result Snippets and the Perception of Search Quality

Study Concludes Robots.txt Files Should be Replaced

One of my first stopping points when assessing whether there are any technical issues involving a Website is a text file in the root directory of a site with the name robots.txt.

Some sites don’t have a robots.txt file, and some don’t necessarily need one, but a dynamic site with endless loops that a search engine spider may get lost within should have a robots.txt file, with a disallow statement keeping spidering programs from trying to index those pages.

A site that republishes the same content under different URLs, such as alternative print versions of pages, should also consider disallowing those pages.
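
As a quick sketch of what those disallow statements might look like, and of how a well-behaved spider would interpret them, here is a made-up robots.txt checked with Python's standard urllib.robotparser. The rules and URLs are illustrative examples, not taken from any real site.

from urllib.robotparser import RobotFileParser

# Made-up rules covering the two situations described above.
EXAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /search/   # dynamic pages a spider could loop through endlessly
Disallow: /print/    # alternative print versions of existing pages
"""

parser = RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())

for url in ("http://example.com/about.html",
            "http://example.com/print/about.html",
            "http://example.com/search/?page=9999"):
    allowed = parser.can_fetch("*", url)
    print(url, "->", "crawl" if allowed else "skip (disallowed)")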

An error in a robots.txt file can have some serious implications for the indexing of a web site.

Continue reading Study Concludes Robots.txt Files Should be Replaced

Stanford’s New PageRank Patent

Calculating PageRank takes a lot of time and computing power. If there was a way to speed up the process, it might make a significant difference to the amount of time that it takes to assign PageRank values to pages.

The original patents involving PageRank, Method for node ranking in a linked database (updated) and Method for scoring documents in a linked database, have been joined by a newly granted patent that explores a faster method of calculating ranks for pages.

On the Web, most links between pages are between pages in the same domain. That’s an observation that the process in this new patent uses to its advantage.

Methods for ranking nodes in large directed graphs
Invented by Sepandar D. Kamvar, Taher H. Haveliwala, Glen Jeh, and Gene Golub
Assigned to the Board of Trustees of the Leland Stanford Junior University
US Patent 7,216,123
Granted May 8, 2007
Filed: August 22, 2003
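
To make that observation a little more concrete, here is a minimal sketch in Python of how per-host blocks of links might be used to speed things up: compute local ranks within each host, rank the hosts themselves, and use the combination as a starting vector so the full calculation needs fewer passes. The toy link graph, the damping factor, and the structure of the code are my own illustrative assumptions; the method actually claimed in the patent is considerably more involved.

from collections import defaultdict
from urllib.parse import urlparse

DAMPING = 0.85  # the usual damping factor; an assumption, not taken from the patent

def power_iterate(links, init, iterations=50):
    # Plain PageRank power iteration over a dict of {node: [outlinks]}.
    rank = dict(init)
    n = len(rank)
    for _ in range(iterations):
        new = {node: (1 - DAMPING) / n for node in rank}
        for node, outlinks in links.items():
            known = [t for t in outlinks if t in new]
            if not known:
                continue
            share = DAMPING * rank[node] / len(known)
            for target in known:
                new[target] += share
        rank = new
    return rank

def block_pagerank(links):
    host_of = {page: urlparse(page).netloc for page in links}
    pages_by_host = defaultdict(list)
    for page, host in host_of.items():
        pages_by_host[host].append(page)

    # 1. Local ranks within each host, using only same-host links.
    local_rank = {}
    for host, pages in pages_by_host.items():
        local_links = {p: [q for q in links[p] if host_of.get(q) == host]
                       for p in pages}
        uniform = {p: 1.0 / len(pages) for p in pages}
        local_rank.update(power_iterate(local_links, uniform))

    # 2. Rank the hosts themselves, using only cross-host links.
    host_links = defaultdict(list)
    for page, outlinks in links.items():
        for target in outlinks:
            if target in host_of and host_of[target] != host_of[page]:
                host_links[host_of[page]].append(host_of[target])
    host_uniform = {h: 1.0 / len(pages_by_host) for h in pages_by_host}
    host_rank = power_iterate(dict(host_links), host_uniform)

    # 3. Combine the two into a starting vector; a better start means the
    #    global iteration needs fewer passes over the full link graph.
    start = {p: local_rank[p] * host_rank[host_of[p]] for p in links}
    total = sum(start.values())
    start = {p: r / total for p, r in start.items()}
    return power_iterate(links, start, iterations=10)

# Toy usage with made-up URLs:
toy_graph = {
    "http://a.example/1": ["http://a.example/2", "http://b.example/1"],
    "http://a.example/2": ["http://a.example/1"],
    "http://b.example/1": ["http://a.example/1", "http://b.example/2"],
    "http://b.example/2": ["http://b.example/1"],
}
print(block_pagerank(toy_graph))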

Continue reading Stanford’s New PageRank Patent

Google Reviews: Reputation + Quality + Snippets + Clustering

You’ve likely seen reviews of businesses in Google Maps, and Seller Ratings in Froogle for merchants. For a few products listed in Google Reviews, such as MP3 players, you will also see product reviews.

I’ve written previously about the Growing power of online reviews, and wrote a detailed breakdown of a Google patent application on how they may find and aggregate online reviews.

That patent application left a lot of questions unanswered about topics such as how reviews are ranked and valued. Google has had five new patent applications published which look like they might answer some of those questions, with some interesting insights.

Tagging Content On Webpages, Print, and Television, with Yahoo’s Y!Q

A recent advertising campaign from Pontiac told viewers that if they wanted to learn more about what Pontiac had to offer, they should go to Google and search for Pontiac.

For the first time that I’ve seen, a patent application has done the same thing:

For additional information on “Y!Q” elements, the reader is encouraged to submit “Y!Q” as a query term to a search engine

So, what’s the big deal about this, and why is it relevant to the discussion of a couple of patent applications from Yahoo? It’s a hint at what Yahoo has planned for the way that people can find more information online for sources that they see offline.

Imagine seeing commercials on TV or in print with a Yahoo Y!Q icon on them, and a keyword or two listed alongside it. Advertisers can “rent” those keywords to have them appear in a Yahoo Y!Q search.

Continue reading Tagging Content On Webpages, Print, and Television, with Yahoo’s Y!Q

Baseball, SEO, and Redirects: Throwing the Gyroball

The baseball season is almost upon us, and I’m really looking forward to the cry of “Playball” from the umpires. I also want to see Daisuke Matsuzaka, who joined the Red Sox this year, and his mythical gyroball. I’m also rooting for Josh Hamilton to turn his life around with the Cincinnati Reds.

A little over a year ago, Matt Cutts used a baseball example to talk about how search engines might handle something known as a 302 redirect.

When someone types http://www.sfgiants.com into their browser address bar, they are taken to that page and then redirected to another page with a much uglier address: http://sanfrancisco.giants.mlb.com/index.jsp?c_id=sf

That happens because the server you reach when you type in the sfgiants.com address has an instruction in it to redirect visitors to a different address. There are two kinds of redirects: a temporary one and a permanent one. The temporary kind, which uses a server code of 302, is supposed to indicate that the new address is only temporary. The permanent kind uses a server code of 301.
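
As a small illustration, here is a sketch using only Python's standard library that asks a URL for its headers (without following the redirect) and reports whether it answers with a 301 or a 302, and where the Location header points. The sfgiants.com address is just the example discussed above; any URL could be checked the same way.

from urllib.parse import urlparse
from http.client import HTTPConnection

def check_redirect(url):
    parsed = urlparse(url)
    conn = HTTPConnection(parsed.netloc, timeout=10)
    conn.request("HEAD", parsed.path or "/")
    response = conn.getresponse()
    status = response.status
    location = response.getheader("Location")
    conn.close()
    if status == 301:
        print(url, "-> permanent (301) redirect to", location)
    elif status == 302:
        print(url, "-> temporary (302) redirect to", location)
    else:
        print(url, "returned status", status, "(no 301/302 redirect)")

check_redirect("http://www.sfgiants.com/")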

Continue reading Baseball, SEO, and Redirects: Throwing the Gyroball