You go to a site that you’ve enjoyed and bookmarked sometime in the past but haven’t visited in a while, and it’s changed. The topics it discusses are different, or the writing style isn’t quite the same, or it suddenly has links within its content to commercial pages that it probably wouldn’t have linked to before, or all of those things at once. It also seems heavily focused on more commercial terms and content. It’s changed, and its pages now have the appearance of what many might call “doorway pages.”
Doorway pages have also been referred to by terms like gateway pages, entry pages, bridge pages, and portal pages. Their primary purpose is to attract visitors from search engines in order to send them on to other places.
As a site owner, you don’t want Google to start identifying your pages as doorway pages. Google’s Webmaster Guidelines tell us to:
One question I’m sometimes asked is whether people should choose a domain name that includes the name of their business or brand, or whether they should use keywords within a domain name to make it easier to rank for those keywords in Google and the other search engines. I often explain that while a keyword domain (often referred to as an exact match domain, or EMD) may help them rank for the phrase chosen, I usually prefer domain names built around a brand, and the best domain names tend to be somewhat short, memorable, and easy to spell, with emphasis on the “memorable.”
I have seen a lot of discussion on the Web about keywords in domain names, including a number of people describing their experiments with exact match domains and how those may help a site rank for terms used in the domain name. The following video was uploaded to the Google Webmaster Help Channel this past March, with Matt Cutts, the head of Google’s Web Spam team, answering the question, “How would you explain ‘The Power of Keyword Domains’ to someone looking to take a decision what kind of domain to go for?”
When search engines return web pages in search results in response to a query, most people assume that the pages being shown are the ones that a search engine has decided are the “best” pages in response to their search terms. But what does the word “best” mean in that context? The search engines attempt to show pages that are both relevant to the query (and the intent of a searcher) and popular.
The intuition is that if your query matches tens of thousands of documents, you would be happier looking at documents that many people thought to mention in their web pages, or that were mentioned at least a few times by people with important pages of their own.
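That intuition, that a page matters when many pages point to it, or when a few important pages do, is the idea behind link-based popularity measures such as PageRank. Here is a minimal power-iteration sketch; the three-page link graph and the 0.85 damping factor are illustrative assumptions, not details of what Google actually runs:

```python
# Minimal PageRank sketch over a tiny hypothetical link graph.
links = {
    "a": ["b", "c"],  # page "a" links out to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85  # illustrative value; the classic paper uses 0.85 too

for _ in range(50):  # iterate until the scores settle
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new[target] += share  # each outlink passes along a share
    rank = new

# "c" ends up most important: it is linked from both "a" and "b".
best = max(rank, key=rank.get)
```

A real ranking function would combine a popularity score like this with relevance signals for the query itself; this sketch only shows the “popularity” half of the story.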
Can patents be said to have family histories? If so, this post is going to introduce a barely known ancestor to one of the most written about search related patents on the Web, as well as a brand new grandchild to the patent.
The patent is Google’s Information retrieval based on historical data, which was filed in 2003, and granted in 2008. When it was published as a pending patent application in 2005, it created a pretty big stir amongst the forums and blogs of the search community.
The patent has two focuses, both of which take advantage of recording changes to a site over time. One is to help identify web spam, and the other is to help avoid returning stale documents in response to a query. It raised questions among SEOs such as how important the ages of domains and of links are, as well as:
Does Google favor fresher sites over older sites, or older sites over fresher sites?
Even more, how does Google weigh the age of a website?
Are the search engines looking at whois data to see who owns websites, and if there has been a change of ownership?
If the content of a site changes, and the anchor text pointing to it remains the same even though it’s no longer relevant, will it still rank for the terms in the anchor text?
If you buy a website and make changes to it, will the PageRank for that site start to evaporate or expire?
How much does feedback from searchers impact the search results that we see at Bing or Google? How do those search engines process and respond to that feedback?
The links that Google and Bing present for searchers to provide feedback on search results appear at the bottom of each engine’s search results pages. If there was a link instead after each search result where someone could provide feedback, how much of an impact would that change have, and would the search engines be able to handle the feedback that they receive?
A patent granted to Microsoft this week describes how the search engine may automate processes for “dissatisfaction reports” that are manually submitted by searchers, and how the search engine may file its own dissatisfaction reports in some instances. While some of the feedback that search engines receive may include web spam reports, they may also receive feedback that something is “broken” with the search engine, or that a URL that should be showing for a specific query isn’t, or that the results just weren’t helpful.
I hadn’t heard the term “Bounce Pad” used to refer to websites before, but it’s useful knowing the language of search engines, and the things they might look for when crawling and indexing webpages and serving results to searchers. Determining whether a site is a bounce pad involves an analysis of the redirects appearing on the site, like in the image below from a Google patent granted this week:
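To make the idea concrete, a bounce-pad check might look at what fraction of a site’s pages redirect visitors off the site. This is only a hypothetical sketch of that kind of analysis: the `looks_like_bounce_pad` function, its inputs, and the 0.5 threshold are my illustrative assumptions, not figures taken from the patent.

```python
from urllib.parse import urlparse

def looks_like_bounce_pad(site_host, pages, threshold=0.5):
    """Hypothetical check: pages maps each URL on the site to its
    redirect target (or None if the page does not redirect)."""
    if not pages:
        return False
    offsite = 0
    for url, target in pages.items():
        # count redirects that send the visitor to a different host
        if target and urlparse(target).netloc != site_host:
            offsite += 1
    return offsite / len(pages) >= threshold

# two of three pages immediately redirect visitors elsewhere
pages = {
    "http://example.com/a": "http://ads.example.net/landing",
    "http://example.com/b": "http://ads.example.net/landing",
    "http://example.com/c": None,  # ordinary page, no redirect
}
flagged = looks_like_bounce_pad("example.com", pages)
```

The patent’s actual analysis would consider more than a simple ratio, such as redirect chains and the kinds of destinations involved, but the ratio captures the basic shape of the signal.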
One of the mysteries associated with Google’s search results is how it determines which pages to show when there are duplicate or substantially duplicated documents within its index. A search engine doesn’t want to show searchers a list of search results that contains substantially the same pages, so when it finds pages that are pretty close to being the same, it will create a “cluster” of those pages and choose a representative page to display.
That kind of duplication can happen for a number of reasons: someone copying content from another page (with or without permission or license to do so), the majority of the content on a page being a manufacturer’s or publisher’s description, a content management system set up so that the same page gets published more than once at different URLs, content being republished on a mirror site or sites set up so that if there’s too much traffic to one of the sites the others may handle the overflow, and more.
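The clustering step described above can be sketched in a few lines. This is not how any search engine actually implements it; the 4-word shingles, the 0.8 similarity threshold, and picking the shortest URL as the representative are all illustrative assumptions standing in for whatever quality signals the engine really uses.

```python
def shingles(text, k=4):
    """Break a page's text into overlapping k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Set overlap between two shingle sets, 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(docs, threshold=0.8):
    """Group near-duplicate docs (URL -> text) and return one
    representative URL per cluster."""
    clusters = []  # each: {"sig": shingle set, "urls": [...]}
    for url, text in docs.items():
        sig = shingles(text)
        for c in clusters:
            if jaccard(sig, c["sig"]) >= threshold:
                c["urls"].append(url)  # near-duplicate of this cluster
                break
        else:
            clusters.append({"sig": sig, "urls": [url]})
    # shortest URL is a stand-in for a real representative-choice signal
    return [min(c["urls"], key=len) for c in clusters]

docs = {
    "http://example.com/page": "the quick brown fox jumps over the lazy dog again and again",
    "http://mirror.example.org/page?id=1": "the quick brown fox jumps over the lazy dog again and again",
    "http://example.com/other": "a completely different article about search engine patents and history",
}
reps = cluster(docs)  # the two identical pages collapse to one result
```

At web scale, comparing every pair of shingle sets is far too slow, which is why production systems use fingerprints such as simhash or minhash instead; the pairwise version above just makes the cluster-and-pick-a-representative idea easy to see.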