If you were an advertising network interested in presenting ads on a social network, what kind of advertising model might you offer the owners of that network to encourage people to advertise on the site?
A recent patent application from Google explores using social profile information from members of a social network, and from their friends or contacts, to determine which advertisements to show viewers when they visit pages of that social network.
The patent filing describes a process that would look at information submitted by members about their interests as well as what kinds of groups they might be a member of, to determine what ads to show on their profile pages.
Over the past year, I’ve been keeping a close eye on some of the activities of nonprofits and environmental groups on the Web as they explore ways to share their message with people online. I’ve had a chance to work with a couple of nonprofits over the past few years, and enjoyed the experience.
I’ve also had an opportunity to talk with a few people involved in sharing the mission and message that their nonprofit organizations stand for, and their passion is contagious.
I’ve also really enjoyed working with small businesses, as they learn how to bring their offerings to the Web. It’s very fulfilling to help people realize their dreams, and bring helpful and useful products and services to the public.
I have been working as the Director of Search Marketing for KeyRelevance, and decided a few weeks back that I wanted to explore working more with environmental groups and nonprofits and small businesses, either on my own or within a nonprofit organization, and gave my notice.
I’ve written in the past about many of the reasons why you might find the same content at different pages on the Web, and some of the problems that duplicate content might present to search engines.
When someone performs a search on the Web, a search engine doesn’t want to show more than one page that contains the same or very similar content to that searcher. A search engine also doesn’t want to spend time and effort in crawling and indexing the same content on different sites.
One of the challenges that a search engine faces when it sees duplicate content is deciding which page (or image or video or audio content) to show to a searcher in search results. If a search engine provided a way for creators of content to find unauthorized uses of their content on the Web, it might take some of that burden off the search engine.
A newly published patent application from Google describes a process that could be provided for people to search for duplicate copies of their content on the Web, even if their content isn’t readily available online.
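The patent filing doesn’t spell out how such matching would work under the hood, but one common way to find near-duplicate copies of a piece of text is w-shingling with Jaccard similarity. The sketch below is purely illustrative, with a hypothetical similarity threshold; it isn’t Google’s method.

```python
# Illustrative sketch only: the patent doesn't disclose Google's matching
# method. w-shingling with Jaccard similarity is one common way to detect
# near-duplicate text; the threshold value here is a hypothetical choice.

def shingles(text, w=4):
    """Break text into overlapping word sequences (shingles) of length w."""
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity: size of the intersection over size of the union."""
    return len(a & b) / len(a | b) if a | b else 0.0

def is_near_duplicate(original, candidate, threshold=0.8):
    """Flag a candidate page as a likely copy of the original text."""
    return jaccard(shingles(original), shingles(candidate)) >= threshold
```

A service built on something like this could accept content that isn’t published anywhere online, compute its shingles once, and compare them against pages found while crawling the Web.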
Not long ago, during a search at Google, a message at the top of the search results told me that my results were,
“Customized based on recent search activity.”
A link next to that message provided more information, telling me that if I signed into my Google Account, I might see “even more relevant, useful results,” based upon my “web history.”
During another recent search, a similar message appeared telling me that my results were based upon my location, with the results biased towards Philadelphia, which isn’t too far away.
I’ve been wondering since what it is that Google is considering when it makes changes to my search results like that. The major commercial search engines act as an index to the Web for many people who rely upon them when looking for information online.
Imagine an index that changes for every searcher.
If you search for news at Google News, you’ve probably noticed that you can view news articles by date or by relevance.
Many of the news articles that you find in Google News are from sources like wire services, where the information is shared amongst many newspapers. Reporters have the option of adding additional information, but often wire service articles at different papers contain little more than the original material, and may often contain less than the original.
So it’s possible that there may be many articles that are substantially the same, and if those are the most relevant result for a search at Google News, it’s likely that Google doesn’t want to show all of those articles in its results.
How does Google decide which articles on the same subject to show in Google News, and how to rank those news articles?
While you can search at google.com just about anywhere in the world, you can also access Google at a number of different country-specific addresses, such as google.co.uk, www.google.fr, and www.google.co.in.
Chances are, if you search at one of the country-specific Google addresses, the results you see may be biased towards pages associated with that country. But when you search at Google.com, the search engine may also try to send you results that might be appropriate for the country you are located within, or a country that you prefer to see results from.
In an Official Google Blog post from July of this year, Technologies behind Google ranking, we were told that, “The same query typed in multiple countries may deserve completely different results.”
So, for example, a search for the query [football] should provide different results in the US, the UK, and Australia, because the term refers to completely different sports.
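The idea behind country-biased results can be sketched very simply: boost results associated with the searcher’s inferred country before sorting. This is only a toy illustration of the concept, not Google’s ranking code, and the boost weight and the country field on each result are assumptions.

```python
# A minimal sketch of country-biased reranking, not Google's actual system.
# The boost factor and the "country" field on each result are assumptions
# made for illustration.

def rerank_by_country(results, searcher_country, boost=2.0):
    """Multiply the base score of results tied to the searcher's country."""
    def score(result):
        base = result["score"]
        return base * boost if result.get("country") == searcher_country else base
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "nfl.com", "country": "us", "score": 1.0},
    {"url": "premierleague.com", "country": "uk", "score": 0.9},
]

# A UK searcher typing [football] would see the Premier League page first,
# even though the US page has a slightly higher base score.
top = rerank_by_country(results, "uk")[0]["url"]
```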
Search pogosticking is when a searcher bounces back and forth between a search results page at a search engine for a particular query and the pages listed in those search results.
A search engine could keep track of that kind of pogosticking activity in the data it collects in its log files or through a search toolbar, and use it to rerank the pages that show up in a search for that query.
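The reranking step described above might look something like the following sketch: treat a click quickly followed by a return to the results page as a pogostick, and demote results in proportion to how often that happens. This is an illustration of the general idea only, not Yahoo’s implementation, and the dwell-time threshold and penalty are hypothetical values.

```python
# An illustrative sketch, not Yahoo's implementation: demote results whose
# clicks are quickly followed by a return to the results page, treating a
# short dwell time as a sign the page didn't satisfy the query. The dwell
# threshold and the linear penalty are hypothetical choices.

POGO_THRESHOLD_SECONDS = 10  # clicks shorter than this count as pogosticks

def pogostick_rate(dwell_times):
    """Fraction of clicks where the searcher bounced straight back."""
    if not dwell_times:
        return 0.0
    bounces = sum(1 for dwell in dwell_times if dwell < POGO_THRESHOLD_SECONDS)
    return bounces / len(dwell_times)

def rerank(results, click_logs):
    """Lower each result's score in proportion to its pogostick rate."""
    def adjusted(result):
        rate = pogostick_rate(click_logs.get(result["url"], []))
        return result["score"] * (1.0 - rate)
    return sorted(results, key=adjusted, reverse=True)
```

In practice the dwell times would come from query logs or toolbar data, aggregated per query and per result, rather than being passed in directly like this.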
A recent patent application from Yahoo describes information that a search engine may collect when searchers click on search results, and suggests that pogosticking information could be used with a ranking system like the one Yahoo described in a patent filing on User Sensitive PageRank, which I wrote about in Yahoo Replaces PageRank Assumptions with User Data.
The Yahoo patent filing on pogosticking is:
Search Pogosticking Benchmarks
Invented by Thomas A. Kehl and Jyri M. W. Kidwell
Assigned to Yahoo
US Patent Application 20080275882
Published November 6, 2008
Filed: May 2, 2007
Why do search engines care about spam pages that show up in search results? What does a search engine consider web spam? How can a search engine identify web spam?
Should someone who publishes information on the Web be concerned that a search engine might label their pages as spam?
Might the best way to avoid having a search engine mislabel your web pages as search engine spam be to focus upon building quality content on the pages of your site?
It probably is, but it doesn’t hurt to look at what the search engines say on these topics, which is a good reason to keep up with patent filings and papers that are published by the search engines.
Good SEO and Bad SEO Techniques