The purpose behind SEO isn’t to outrank every other site on the Web for certain queries. The purpose behind SEO isn’t to draw large amounts of traffic to a web site.
Rather, the purpose behind SEO is to make it easier for people to find a site that they are interested in, that offers what they are looking for, and that meets some informational or transactional need that they might have.
Ranking number one in search results isn’t always the best place to be. Sometimes it’s better to rank number two, or even a little lower, especially if someone visits one or more of the sites above yours, and sees that those sites don’t deliver what you offer.
Case in point: a site that I’d been working on for years had been trading places between the number one and number two positions in Google’s results with another site for a very relevant query term. When the site was at the number two position, it tended to get many more conversions from visitors than when it was at the number one position.
Both sites were very relevant for that specific query term. Both fulfilled visitors’ informational needs. But the other site didn’t actually provide services based upon that information, while the site I was working with did. Being number two seemed like a good place to be.
The Google OneBox is a search result that sometimes appears below sponsored advertisements and above organic search results when you perform a search at Google. An example is when you perform a search such as a city name and the word “weather”. Google also offers specialized Google OneBox for Enterprise results for customers who use the Google Search Appliance or Google Mini.
It’s possible for you to create OneBox results for your own website, using a feature that appears to have originated with Google Co-op. Google has a fair amount of documentation on the use of subscribed links, though it appears that the discussion group about subscribed links from that page has been removed from Google Groups for violations of Google’s Terms of Service.
When I’m looking for information on a topic, I’ll rarely stop at one search regardless of how good or poor the information I find on the topic might be.
I’ll look at some of the results that I receive from my search, and possibly change the words I use in my search based upon what I see in those search results. Sometimes I’ll ignore those results and try out other terms. I might add a word or two to better focus my search, or remove some words to better target what I’m looking for. I might use an advanced search operator, such as a minus sign immediately in front of a word, to try to filter out some results that aren’t relevant to what I’m trying to find.
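The exclusion operator mentioned above can be illustrated with a toy sketch. The result titles and the helper functions below are invented for illustration; a real search engine applies exclusions against its full index, not against a handful of titles.

```python
# Toy illustration of how a "-word" exclusion operator narrows a result set.
# The query parser and the result titles here are made up for this sketch.

def parse_query(query):
    """Split a query into required terms and excluded terms (leading '-')."""
    required, excluded = [], []
    for token in query.split():
        if token.startswith("-") and len(token) > 1:
            excluded.append(token[1:].lower())
        else:
            required.append(token.lower())
    return required, excluded

def matches(text, query):
    """True if text contains every required term and no excluded term."""
    required, excluded = parse_query(query)
    words = text.lower().split()
    return all(t in words for t in required) and not any(t in words for t in excluded)

results = [
    "jaguar car dealership",
    "jaguar habitat and diet",
    "jaguar car repair manual",
]

# Excluding "car" filters the automotive pages out of this toy result set.
filtered = [r for r in results if matches(r, "jaguar -car")]
print(filtered)  # ['jaguar habitat and diet']
```

The same kind of progressive narrowing happens when a searcher adds or removes plain words, just without the explicit operator.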
A couple of researchers from the University of Washington have published a paper, to be presented at the 18th ACM Conference on Information and Knowledge Management (CIKM 2009) in November 2009, that takes a close look at how people search on the Web, and how those searchers might reshape and rewrite the query terms they use when trying to find information on a subject.
If you’re a searcher, knowing some of these strategies might help you find information on topics that you might be having trouble finding. If you’re a site owner, having some knowledge about how people search might help you think about how people might find your pages through search engines.
Webmasters sometimes move web sites from one domain to another, change the URL structures pointing to their web pages, or rename those pages themselves.
Changing the URLs for pages isn’t something that should be done without a lot of thought and without very good reasons, especially if there are many links and references on the Web to the old URLs. See Cool URIs don’t change for a number of technical ideas on planning what to use for your URLs so that it’s less likely that you might need to change them.
Regardless, webmasters do sometimes change the URLs for pages found on the Web.
This can sometimes happen when the owner of a site decides to change its name, or to rebrand its products, or merges or acquires another site or business and wants to consolidate the web pages from the other site under one name. It can also happen when a blogger decides to change the permalink structure of their URLs. Sometimes product lines are renamed, and the sellers of those products want people looking for them to find the products under the new names. There are many other reasons why the URLs to pages change.
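When URLs do have to change, the usual practice is to map each old URL to its new counterpart and send a permanent (301) redirect, so existing links and bookmarks keep working. Here is a minimal sketch of that idea in Python; the paths, the redirect table, and the redirect_for helper are all invented for illustration.

```python
# Hypothetical sketch: mapping old URLs to new ones after a site restructuring.
# The paths in REDIRECTS are made up for this example.

REDIRECTS = {
    "/2008/05/old-post-name": "/archive/old-post-name",
    "/products/widget-classic": "/products/widget",
}

def redirect_for(path):
    """Return the (status, location) a server might send for a given path."""
    new_path = REDIRECTS.get(path)
    if new_path is not None:
        return 301, new_path   # permanent redirect: the URL has moved for good
    return 200, path           # no entry: serve the page as-is

print(redirect_for("/products/widget-classic"))  # (301, '/products/widget')
```

In practice this mapping usually lives in server configuration (rewrite rules) rather than application code, but the one-old-URL-to-one-new-URL principle is the same.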
We’re not often given much insight directly into how a search engine like Google might check on the quality of its search results, and the algorithms that achieve those results. When we are, it can be interesting to look at some of the processes that its researchers might use, the assumptions that they follow, and the conclusions that they reach.
What kinds of experiments would you perform if you were from one of the major search engines, and you wanted to compare two different algorithms that provided similar quality search results? Or you wanted to learn more about how people use the search engine, and if small changes might impact that use?
A couple of recent papers from Google describe experiments that the search engine performed.
Search Task Time and Searcher Satisfaction
How well do search engines understand the linking structure of a web site? Do they have ways to organize and classify individual links and blocks of links that they see on the pages of a site?
Do they treat links and collections of links that they find on more than one page of a site differently than links and collections of links found only on one page? If they find more than one group of links on a page that contain many of the same links, say one group at the top of the page and another at the bottom, how might they treat those links?
I came across a patent filing from Microsoft from last summer that explored many of these topics, as well as others. It hadn’t drawn much attention, so I decided to take a closer look at it here.
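The patent itself doesn’t publish code, but the idea of segmenting a page’s links into blocks can be sketched with a simple heuristic: treat each list or navigation container as one block of links. The markup below and the container choice are assumptions for illustration only.

```python
# Rough sketch of link-block segmentation, assuming the simple heuristic
# that each <ul> or <nav> element groups one block of links.

from html.parser import HTMLParser

class LinkBlockParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.blocks = []     # list of link blocks, each a list of hrefs
        self.current = None  # the block being collected, or None

    def handle_starttag(self, tag, attrs):
        if tag in ("ul", "nav"):
            self.current = []
        elif tag == "a" and self.current is not None:
            href = dict(attrs).get("href")
            if href:
                self.current.append(href)

    def handle_endtag(self, tag):
        if tag in ("ul", "nav") and self.current is not None:
            self.blocks.append(self.current)
            self.current = None

page = """
<ul><a href="/home">Home</a><a href="/about">About</a></ul>
<p>Body text with a <a href="/deep-page">contextual link</a>.</p>
<ul><a href="/home">Home</a><a href="/about">About</a></ul>
"""

parser = LinkBlockParser()
parser.feed(page)
print(parser.blocks)
# Two identical blocks, one at the top and one at the bottom of the page:
# [['/home', '/about'], ['/home', '/about']]
```

A search engine noticing that the first and last blocks contain the same links, and that the same blocks repeat across many pages of a site, might classify them as navigation rather than as editorial links, which is the kind of distinction the patent filing explores.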
Segmentation and Link Blocks
One of the things that I like to do for sites that I work upon is to create an SEO content inventory.
I find it helpful to have information all in one place about the content that might appear on different pages of a site, and it can be very useful as a planning tool. The idea isn’t new; usability.gov has a nice description on its pages, from a design standpoint, of why it can be helpful to conduct a content inventory.
Jeffrey Veen also published a post a number of years ago about using a tool like this when he works on information architecture and design issues for clients, in Doing a Content Inventory (Or, A Mind-Numbingly Detailed Odyssey Through Your Web Site).
One of the differences between the approaches that usability.gov and Jeffrey Veen describe and the one that I like to use is that I include more details involving search engine optimization. For instance, in my inventory, there’s a space for the “present” page title, meta description, and meta keywords, and for the “future” title, meta description, and meta keywords.
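A minimal sketch of that inventory format, written out as a CSV with the “present” and “future” columns described above. The example URL and field values are made up; real inventories usually carry many more columns (status codes, word counts, target keywords, and so on).

```python
# Hypothetical SEO content inventory with "present" and "future" columns
# for title, meta description, and meta keywords. Example values are invented.

import csv

FIELDS = [
    "url",
    "present_title", "present_meta_description", "present_meta_keywords",
    "future_title", "future_meta_description", "future_meta_keywords",
]

rows = [{
    "url": "/services/",
    "present_title": "Services",
    "present_meta_description": "",
    "present_meta_keywords": "",
    "future_title": "SEO Consulting Services | Example Co.",
    "future_meta_description": "Overview of our SEO consulting services.",
    "future_meta_keywords": "seo, consulting",
}]

with open("content_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Keeping the present and future values side by side makes it easy to see, page by page, what still needs to be rewritten.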
Can looking at how many times rare words appear in a search engine’s index give us an idea of the size of the database for that search engine?
About a week ago, I wrote about some of the most common English words in the indexes for Google, Yahoo, Bing, Ask, and Google Caffeine. I took a look at 50 words that are amongst the most frequently appearing words in English, and at estimates from those search engines about the number of times that those words showed up.
Comparing the number of results between the different search engines for those common words really didn’t tell us anything about the relative sizes of the indexes for those search engines for a number of reasons.
One is that the numbers of results shown are rough estimates only. It’s also possible that the way those estimates are calculated differs considerably from one search engine to another. Some of the pages listed among those results are likely duplicate pages at different URLs, or may have contained misspellings of the words. Some of the words may be abbreviations or acronyms as well (such as “it” being an abbreviation for information technology).