There are many web sites for nonprofit organizations online that could use a little direction, a little help from people in the web design and internet marketing communities.
I came across a site this weekend that works to connect professionals interested in helping nonprofits with the organizations that need their help.
The Taproot Foundation is a nonprofit that partners with corporations, universities, and trade associations to provide pro bono marketing, human resources, and IT consulting to nonprofit organizations.
The term “pro bono” means “for the good,” and Taproot has been working to connect business professionals with nonprofits since 2001, enabling those professionals to give a few hours a week to organizations that can benefit from their experience and expertise.
Many of the Taproot projects have involved creating or updating web sites for nonprofits. Here are some of the nonprofits that have benefited from working with Taproot on projects involving basic or advanced web sites:
How much responsibility do search engines have to police the internet and protect their users from email, instant message, and web spam, phishing fraud, and misuse of chat rooms? How can search engines be socially responsible and work to keep consumers from harm?
How effective can they be in pro-actively identifying and fighting phishing attempts on the Web as they index the Web?
A newly published patent application from Yahoo tells us that people using the Web have an expectation that their service providers will help protect them from such abuses, and it describes how Yahoo might use robot programs to help identify emails, IMs, chat rooms, and Web sites where inappropriate activities take place.
The document goes into detail on how a Yahoo robot might interact with others in chat rooms, and through email and instant messaging programs, to identify spam and fraud, including phishing attacks aimed at obtaining passwords, credit card details, social security numbers, and other confidential information.
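The patent application doesn't publish the robot's actual detection rules, but the general idea can be sketched. The phrase list, the link test, and the decision rule below are all invented for illustration; they are a minimal stand-in for whatever signals Yahoo's monitoring robots might actually use.

```python
import re

# Hypothetical trigger phrases; the real signals in the patent
# filing are not spelled out here.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "confirm your password",
    "social security number",
    "credit card",
]

def looks_like_phishing(message: str) -> bool:
    """Flag a chat or email message that both asks for confidential
    details and pushes the reader toward a link."""
    text = message.lower()
    asks_for_secrets = any(phrase in text for phrase in SUSPICIOUS_PHRASES)
    contains_link = re.search(r"https?://\S+", text) is not None
    return asks_for_secrets and contains_link

print(looks_like_phishing(
    "Please verify your account at http://example.com/login"))  # True
print(looks_like_phishing("See you at the coffee shop at 3?"))  # False
```

A production system would weigh many more signals (sender reputation, link destinations, message volume) rather than a single phrase-plus-link rule.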
When stockbrokers who spend their day searching for financial information about different businesses type the word “Starbucks” into Google’s search box, chances are that they are more likely to be looking for stock price information than the closest place that they can get a mint mocha chip frappuccino.
When a city-dwelling college student, who likes to meet up with friends at new places all the time, using his cell phone to find and map out those places, types the word “Starbucks” into his phone’s browser, the first thing he wants to see is probably a map to the nearest Starbucks.
Can a search engine be smart enough to serve a stock price quote at the top of search results to the stockbroker, and a map to the college student, even if both are using handheld devices to connect to the Web?
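The scenario above can be reduced to a toy sketch: the same query yields a different result type depending on a user profile. The profile fields, categories, and rules here are invented for illustration; any real personalization model would be far richer than this.

```python
# Hypothetical profile fields: "interest" and "device" are invented
# labels, not anything a search engine actually exposes.
def top_result_type(query: str, profile: dict) -> str:
    """Pick which kind of result to show first for a query,
    given a (toy) user profile."""
    if profile.get("interest") == "finance":
        return "stock quote"
    if profile.get("device") == "mobile" and profile.get("interest") == "local":
        return "map"
    return "web results"

broker = {"interest": "finance", "device": "mobile"}
student = {"interest": "local", "device": "mobile"}
print(top_result_type("Starbucks", broker))   # stock quote
print(top_result_type("Starbucks", student))  # map
```

Note that both users are on mobile devices; the differentiator is the inferred informational need, not the hardware.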
Relevance and Informational Needs
As a webmaster, when you put a page up on the web, there may be parts of that page that you don't want a search engine to index.
Many web pages contain information that isn’t unique to each page, such as the navigation for a site, copyright notices, advertising, links to other sites such as blog rolls, and other sections that may not contain information about the main topic of the page itself.
Yahoo’s Robots-Nocontent Classes
In May of 2007, Yahoo published a post on the Yahoo Search Blog, titled Introducing Robots-Nocontent for Page Sections, about how webmasters could let the search engine know that content in certain sections of pages shouldn't be returned in search results to searchers.
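On the indexing side, honoring a marker like Yahoo's `class="robots-nocontent"` amounts to skipping text inside the marked elements while collecting the rest. Here is a minimal sketch using Python's standard-library HTML parser; the sample page and the flat depth counter are simplifications, not how any search engine's actual parser works.

```python
from html.parser import HTMLParser

class NocontentFilter(HTMLParser):
    """Collect page text while skipping elements marked with
    class="robots-nocontent" (a sketch; real indexers are far
    more involved and handle malformed markup)."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # > 0 while inside a nocontent section
        self.indexable = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        # Count nesting so we resume collecting only after the
        # marked element (and everything inside it) is closed.
        if self.skip_depth or "robots-nocontent" in classes:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.indexable.append(data.strip())

page = ('<div class="robots-nocontent">Site navigation</div>'
        '<p>The main topic of the page.</p>')
parser = NocontentFilter()
parser.feed(page)
print(parser.indexable)  # ['The main topic of the page.']
```

The navigation text is dropped before indexing, while the page's main content survives.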
Yahoo was granted a patent this week which describes how anchor text in links may be used to increase the relevancy ranking of a page pointed to by that anchor text. The patent was originally filed in 2002, and it names the Altavista search engine as a possible place where the methods it describes might be implemented. Yahoo later acquired the company that owned Altavista, and the technology is now theirs.
While the patent is fairly old, it provides some details about how anchor text might be used by a search engine in a search index that may not be widely known.
It’s fairly common knowledge that the major commercial search engines pay attention to the anchor text in links pointing to pages, and may consider a page to be even more relevant for a query term if the term not only appears on a page, but also appears in the linked anchor text pointing to a page. Some pages may even be determined to be relevant for words that they don’t contain, but which show up in links to those pages.
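The idea can be sketched in a few lines. The weights below (1.0 for an on-page match, 0.5 per anchor match) are invented for illustration; the patent does not publish real scoring values, and a real ranker combines many more signals.

```python
# A simplified sketch: a page scores for a term if the term appears
# in its text, and scores more for each incoming link whose anchor
# text also contains the term. Weights are invented, not Yahoo's.
def relevance(term: str, page_text: str, anchor_texts: list) -> float:
    score = 0.0
    if term in page_text.lower():
        score += 1.0
    score += 0.5 * sum(term in anchor.lower() for anchor in anchor_texts)
    return score

print(relevance("seo", "a blog about seo news", ["SEO by the Sea", "home"]))
# on-page match (1.0) + one anchor match (0.5) = 1.5

# A page can score for a word it doesn't contain, purely from anchors:
print(relevance("seo", "a marketing blog", ["SEO by the Sea"]))  # 0.5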
Search engine optimization is an ever growing and ever changing field, and as search engines and the Web change, so does SEO.
There are no classrooms or college courses, no single site, conference series, or book that can help you keep up with those changes.
Paying attention to a lot of blogs, news reports, press releases, and other sources of information can help provide some insights about changes in SEO, and discussions at forums, conferences, and social sites can present a lot of signals and noise about what might be new in search. It's not always easy, and sometimes not even possible, to distinguish between the signals and the noise.
I look at a lot of patent filings and papers from the search engines here because they can provide views of how search engines may work from the perspective of the search engines. I consider them primary sources because they come directly from the search engines, but even those sources often only provide glimpses of possibilities rather than actual insights into how search engines function.
Perhaps the best value that may be taken from search engine patent filings isn’t so much the processes that they describe, but rather the hints of assumptions behind some of the methods and systems that they present.
Have you ever searched at a search engine and received results that weren’t very good matches?
You may have searched again after changing the query terms that you used, or you may have given up on the search.
For example, you perform a search, such as “pizza in Elkton, Maryland,” and you don’t receive any actual matches on “pizza” in the locality of “Elkton.” It’s possible that there may be “pizza” results in nearby areas.
Should a search engine display those results from the nearby area, even though they weren’t quite what you were looking for?
How would a search engine go about expanding your original query, to return results to you that might be relevant, but that don’t match the words that you used when searching?
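One way to picture this kind of expansion: when a local query returns nothing, fall back to results from nearby localities. The neighbor table, the listings, and the business name below are all invented for illustration; they only sketch the fallback idea, not any search engine's actual method.

```python
# Hypothetical data: which localities count as "nearby," and which
# listings exist. Both tables are invented for this sketch.
NEARBY = {"elkton, md": ["north east, md", "newark, de"]}
LISTINGS = {
    ("pizza", "newark, de"): ["Example Pizza Co."],
}

def local_search(term: str, locality: str):
    """Return (results, locality they came from), expanding the
    query to nearby areas when the exact locality has no matches."""
    exact = LISTINGS.get((term, locality), [])
    if exact:
        return exact, locality
    for neighbor in NEARBY.get(locality, []):
        nearby = LISTINGS.get((term, neighbor), [])
        if nearby:
            return nearby, neighbor  # expanded: a nearby area matched
    return [], locality

results, where = local_search("pizza", "elkton, md")
print(results, "from", where)  # ['Example Pizza Co.'] from newark, de
```

A real system would also have to decide how to label such results, since showing them as if they matched the original locality could mislead the searcher.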