A thoughtful and intelligent article from Shannon Watters at Digital Web this week, How to Choose an eCommerce Package, offers some great suggestions on what to look for when choosing software for an online shop, whether it sells goods, services, or both.
Shannon writes about what she calls the “top eleven things to consider when choosing an eCommerce package,” and it’s difficult to argue with her selections, but I was hoping for an even dozen, with a twelfth consideration added: how the ecommerce software might interact with search engines.
Of course, an ecommerce system should be easy to use for both shopper and site owner. Updating the software, and adding functionality from third party toolmakers should be a breeze. The software should be able to scale with growth, and it should be easy to use with an analytics package, so that you can measure your traffic and see how visitors use the site.
Shannon’s suggestions regarding promotions and discounts and the ability to offer customer service are spot on. Security is essential, and an intuitive checkout process will be a major determinant of whether visitors become customers. Her opinions on open source options, and on the community and company behind a software package, are filled with thoughtful suggestions.
Continue reading When Choosing an eCommerce System, Remember the Search Engines
When you type a domain name into your browser address bar and the domain isn’t found, sometimes you’ll be served a search results page that has advertisements and links related to a “subject” for that domain name.
For example, you might type “usedrugs.com” into the address bar, and there may not be a website at the domain name “usedrugs.com”. You may be redirected to a third-party website, with advertisements and/or links relevant to that domain name. Ads might be shown for the phrase “used rugs” on that web page, if it is determined to be the most likely segmented version of the string of text from the domain name.
Some sites might be filtered from appearing in search results because the domain names may seem to potentially indicate adult related material.
For example, a domain name, such as “mikesexpress.com”, could be filtered out of search results by an adult filter, because the word “sex” appears in the string of characters.
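The segmentation problem described above can be sketched in a few lines of Python. This is a minimal illustration, not the method from the patent filing: the word list and the fewest-words scoring rule here are my own simplifying assumptions. It shows how “usedrugs” is genuinely ambiguous (“use drugs” vs. “used rugs”), and how segmenting “mikesexpress” into whole words avoids the false positive that a naive substring check for “sex” would trigger.

```python
# Hypothetical dictionary-based segmentation of a domain-name string.
# The word list and the "prefer fewest words" heuristic are illustrative
# assumptions, not the scoring method from the patent application.

WORDS = {"used", "use", "drugs", "rugs", "mikes", "mike",
         "express", "ex", "press", "sex"}

def segmentations(text):
    """Return every way to split `text` into dictionary words."""
    if not text:
        return [[]]
    results = []
    for i in range(1, len(text) + 1):
        prefix = text[:i]
        if prefix in WORDS:
            for rest in segmentations(text[i:]):
                results.append([prefix] + rest)
    return results

def best_segmentation(text):
    """Prefer the split with the fewest words -- a crude stand-in for the
    likelihood scoring a real system would apply."""
    candidates = segmentations(text)
    return min(candidates, key=len) if candidates else None

print(segmentations("usedrugs"))          # both 'use drugs' and 'used rugs'
print(best_segmentation("mikesexpress"))  # ['mikes', 'express'] -- no 'sex' token
print("sex" in "mikesexpress")            # True: why raw substring filters misfire
```

A filter that checks segmented words rather than raw substrings would let “mikesexpress.com” through, since none of its most likely word tokens is on a blocklist, even though the character string contains “sex”.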
Continue reading Improving Text Segmentation for Displaying Advertisements and Filtering Search Results
When you perform searches in Google, sometimes you will see within the listed search results one which has a plus sign next to it. If you click upon the plus sign, you are shown some more information. A new patent application published at the USPTO website provides some information about how expanded and collapsed data in search results might work.
It also raises some questions about how much information search engine results should actually show.
These types of results may have first appeared as expandable maps and local business information for specific businesses. Google referred to this feature in their Webmaster Help pages as a plus box (though the link is no longer live):
The address link shown below some sites in our search results (in an expandable area called a Plus Box) is meant to help searchers locate businesses and compare search results. We show the address link for results that are local in nature and for which we have an associated address. If we don’t have an address for your business, or we don’t think that an address is relevant to your site, we won’t show it.
Continue reading Google Plus Box Patent Application
One of the most talked about patent applications from Google over the past couple of years was one which looked at how time might be incorporated into a system of ranking documents, and how time might help the search engine recognize when people might be attempting to manipulate (spam) search results.
The patent application was published in March of 2005 – Information retrieval based on historical data
Over the past couple of months, Google has had some new patent applications published which share a good amount of the description of that original patent filing, but contain new and modified claims.
Can Web traffic information help to improve the relevancy of search results?
Should a search engine learn about how to rank a page by watching searchers use other search engines?
Can information gathered from Internet Service Providers (ISPs) and Web proxies be used to construct a near real-time map of the Web that folds information about that traffic into a ranking system for those pages?
A new patent application from the Pisa-based research team at Ask.com explores these topics and a few more, and suggests ways to improve the freshness, coverage, ranking and clustering of search results through looking at user Web traffic data.
Why Look at Web Traffic Information?
There are three basic tasks that a search engine will normally perform. It will:
Continue reading Calculating Search Rankings with User Web Traffic Data
Search Engine Optimization with PHP
Web Dragons: Inside the Myths of Search Engine Technology
Google’s Pagerank and Beyond: The Science of Search Engine Rankings
There are a number of very good books and ebooks on search engine optimization and search, and I’m always on the lookout for more.
A couple of months back, Jaimie Sirovich mentioned on his blog, SEO Egghead, that his publisher was providing some review copies of his new book for people who might write about them, and Jaimie asked anyone who might be interested in reviewing a copy to contact him. I sent him a note, letting him know that I would be interested, and I’m glad that I did.
Mike Grehan also mentioned a couple of books in his Clickz articles over the past few months that sounded pretty interesting, and I ordered them based upon his referrals. Once again, I’m happy that I followed his suggestions for search related reading.
Search Engine Optimization with PHP
by Jaimie Sirovich and Cristian Darie
Continue reading Recent Readings on SEO, Search, and Rankings
We often talk about relevance when it comes to writing webpages, but another aspect of a page’s content that needs to be considered is how engaging and persuasive our writing might be. What makes one result appear more relevant, and more trustworthy, than another?
A paper from researchers at A9 and Yahoo, Summary Attributes and Perceived Search Quality, shows some experimentation on how people might perceive search results based upon what a search engine displays from Web pages on its search results pages.
A list of search results will usually contain the title, URL, and an abstract or snippet from pages. If the words within the meta description contain terms from the search query, part or all of the meta description may be shown to searchers. If the page is returned as relevant for a term, and the words used aren’t in the meta description, other words from the page may be shown instead.
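The selection logic described above can be sketched as a short Python function. This is a simplified assumption of how such a system might behave, not any engine’s actual algorithm: show the meta description when it shares terms with the query, otherwise fall back to a window of page text around the first matching term.

```python
# Minimal sketch of snippet selection: meta description if it matches the
# query, otherwise a window of page text around the first query term.
# Window size and term matching are simplifying assumptions.

def choose_snippet(query, meta_description, page_text, window=10):
    terms = set(query.lower().split())
    # Use the meta description when it contains any query term.
    if terms & set(meta_description.lower().split()):
        return meta_description
    # Otherwise extract a window of words around the first matching term.
    words = page_text.split()
    for i, word in enumerate(words):
        if word.lower().strip(".,") in terms:
            start = max(0, i - window // 2)
            return " ".join(words[start:start + window]) + " ..."
    # No term found: fall back to the opening words of the page.
    return " ".join(words[:window]) + " ..."

snippet = choose_snippet(
    "seo patents",
    "A blog about search engines.",
    "This site covers SEO patents and papers from the major search engines.",
)
print(snippet)  # window of page text, since the meta description lacks the terms
```

A real system would also score competing windows, bold the matched terms, and trim at sentence boundaries, but the same basic choice between meta description and extracted page text drives what the searcher sees.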
What is it that searchers are looking for that might cause them to choose one search result over another to click through and investigate further?
Continue reading Search Result Snippets and the Perception of Search Quality
One of my first stopping points when assessing whether there are any technical issues involving a Website is a text file in the root directory of a site with the name robots.txt.
Some sites don’t have a robots.txt file, and some don’t necessarily need one, but a dynamic site with endless loops that a search engine spider may get lost within should have a robots.txt file, with a disallow statement keeping spidering programs from trying to index those pages.
A site that republishes the same content under different URLs, such as alternative print versions of pages, should also consider disallowing those pages.
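Both situations can be handled with a couple of disallow statements. A hypothetical example, with made-up paths standing in for a site’s actual dynamic script and print directories:

```text
User-agent: *
# Keep spiders out of a calendar script that can generate endless date pages
Disallow: /cgi-bin/calendar
# Skip the duplicate print-friendly versions of articles
Disallow: /print/
```

Note that the rules are prefix matches against the URL path, and that `User-agent: *` applies them to every well-behaved crawler, which is why a single stray character in one of these lines can block far more of a site than intended.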
An error in a robots.txt file can have some serious implications for the indexing of a web site.
Continue reading Study Concludes Robots.txt Files Should be Replaced