I like digging into some of the patents and papers that come from search engines and academics who study how search works.
But something else I find fascinating is how marketing fits into Search Engine Optimization (SEO), and how important it is to know about both to be successful in getting traffic to a site. Or should I say the right traffic – visits from the people who will find the pages of a web site interesting and engaging.
A lot of that crossover involves gaining insight into the words that people will both use to find a site and expect to see on its pages. That insight doesn’t come out of doing some research on Wordtracker or NicheBot or the Overture keyword selection tool (no longer available).
Those can be nice tools to use, but some of the most important steps in finding meaningful words that people will search for come earlier, before you even look at those tools.
There are some great pictures of Google Japan over at Search Engine Journal from Loren Baker. Loren and John Scott met up at the Tokyo offices of the search giant, and it sounds like they had a pretty informative tour, discussing issues such as search privacy in Japan, the new addition of local search, and more.
Wonder if we’ll start seeing some of those Google vending machines make their way to this side of the world.
There’s been a lot of discussion on the web, and in the news over the past few weeks about Google’s operations in China.
The Chinese version of their site, Google.cn, filters out content that the Chinese government doesn’t want included in search results. As noted in the Stanford Daily (link no longer available), the Chinese language version of Google.com is unfiltered.
An issue recently arose regarding whether Google even had a business license to operate Google.cn, though that problem seems to have been resolved, with a license granted after Google made a deal with Ganji.com to use their license.
One of the more interesting sets of commentary on Google in China is the series of posts from economics Professor Gary Becker and federal appellate Judge Richard Posner at The Becker-Posner Blog.
Over at Threadwatch, Graywolf started a thread titled Are you Optimizing for Google Definitions? There are some insightful comments in the thread, and I recalled a Google patent application that covered the topic.
I looked around the web to see if there had been any discussion about the patent application, but couldn’t find any. The document is System and method for providing definitions, (US Patent Application 20040236739) invented by Craig Nevill-Manning, filed on June 27, 2003 and published on November 25, 2004.
The abstract for the application is pretty general, but the document is fairly detailed. Here’s the abstract:
A system and method for providing definitions is described. A phrase to be defined is received. One or more documents, which each contain at least one definition, are determined. The phrase is matched to at least one of the definitions. One or more definitions for the phrase are presented.
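The pipeline the abstract describes (receive a phrase, find documents containing definitions, match the phrase against them, present the matches) can be sketched roughly in Python. To be clear, this is a toy illustration built on my own assumption of a simple “X is a …” sentence pattern, not the matching heuristics the patent application actually details:

```python
import re

# Toy sketch of the pipeline in the abstract: scan documents for
# definition-style sentences and return those whose defined term
# matches the phrase. The "X is a ..." pattern is my own assumption,
# not the application's actual heuristics.
DEFINITION = re.compile(
    r"(?P<term>[A-Za-z][\w\s-]*?)\s+(?:is|are)\s+(?:a|an|the)\s+.+",
    re.IGNORECASE,
)

def find_definitions(phrase, documents):
    results = []
    for doc in documents:
        for sentence in re.split(r"(?<=[.!?])\s+", doc):
            m = DEFINITION.match(sentence.strip())
            if not m:
                continue
            # Drop a leading article before comparing the defined term
            term = re.sub(r"^(?:a|an|the)\s+", "", m.group("term").strip(),
                          flags=re.IGNORECASE)
            if term.lower() == phrase.lower():
                results.append(sentence.strip())
    return results

docs = [
    "A crawler is a program that fetches web pages. It runs continuously.",
    "PageRank measures link popularity.",
]
print(find_definitions("crawler", docs))
# ['A crawler is a program that fetches web pages.']
```

A real system would, among other things, score candidate definitions and decide which sources look authoritative; the sketch only shows the receive-find-match-present flow.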
Determining how a term or phrase may be used in the context of a page can be helpful in deciding how relevant that page is in responding to a query from a searcher.
A patent application from Google was published this week that looks at possible ways of considering the context of those words, and it describes a multiple-stage process for determining relevancy and finding results for a search.
The document is fairly complex, but some possible actions that can be taken during the different stages described are:
I’ve been patiently waiting for the chance to try out Measure Map. It’s an interesting-looking analytics tool that hasn’t gotten much past the testing stage. I’ve seen a few blog posts over the past couple of months from people who have been using it, and enjoying it.
A new blog post (from Jeffrey Veen of the Google Measure Map Team) explains why I might never get that invite, with Measure Map now part of Google. Congratulations to the folks at Adaptive Path on their success in what was their “first initiative to develop products in-house.” It appears that Jeffrey Veen and some of the other folks who worked on Measure Map will be leaving Adaptive Path to join Google.
How much information is included in the databases of the different search engines? How do these numbers strike you?
- Google: 53 billion pages
- Yahoo!: 8.4 billion pages
- MSN: 3.7 billion pages
Those are estimates from four researchers at the Stanford University Dept. of Computer Science, who have come up with a method of Estimating the Index Sizes of Search Engines (the article has been removed from the Stanford pages – see comments below for more details) based only upon information that could be gathered from the public interfaces of the search engines.
There are a number of questions about the results they received, but they consider and discuss potential sources of error that may throw off their numbers.
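The Stanford article itself is no longer available, but the general flavor of sizing an index from the outside can be illustrated with a classic capture-recapture estimate over two random samples of URLs. This is my own simplified stand-in for the idea, not necessarily the method the researchers used:

```python
# Classic capture-recapture (Lincoln-Petersen) estimate of a set's size
# from two independent random samples: N ~= |A| * |B| / |A n B|.
# A simplified stand-in for estimating index sizes from public
# interfaces; the Stanford method itself is more involved.
def estimate_index_size(sample_a, sample_b):
    overlap = len(set(sample_a) & set(sample_b))
    if overlap == 0:
        raise ValueError("samples share no URLs; cannot estimate")
    return len(set(sample_a)) * len(set(sample_b)) / overlap

# Two hypothetical random samples of URLs drawn from the same index
a = ["u1", "u2", "u3", "u4"]
b = ["u3", "u4", "u5", "u6"]
print(estimate_index_size(a, b))  # 4 * 4 / 2 = 8.0
```

The hard part in practice, and presumably in the Stanford work, is drawing samples that are anywhere near uniform using only the public query interface, which is where most of the potential errors the researchers discuss would creep in.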
Google isn’t the biggest search engine that Anna Lynn Patterson has worked on. That distinction probably falls to the Internet Archive, which she worked on before joining Google, and which likely has a few billion more pages in its database than Google (the archive has 55 billion web pages right now).
In addition to that feat, Anna is the author of a pretty good article on search engines, over at ACM Queue, titled Why Writing Your Own Search Engine is Hard.
The latest search engine description from Anna Patterson, published yesterday, involves a search engine immune to Google bombing. It could be said to reward authors for well-written HTML and good punctuation, and it can find relevant pages even when the query terms don’t appear on those pages. She also describes a way to perform personalization with the search engine, and to detect and eliminate duplicates.
The search engine that she has conceived of can also be set to serve a mix of relevant pages from different topics in search results to searchers. For example, a search for “Blues” could easily be set to display pages on the first page of search results that lead to: