One of the challenges of optimizing an e-commerce site with many filtering and sorting options is creating a click path through the site so that every page you want indexed by a search engine gets crawled and indexed. This can require blocking some URLs from being crawled via the site’s robots.txt file, using parameter handling, and setting meta robots elements to noindex on certain pages.
Without that kind of care, many more URLs on a site may be crawled and indexed than should be. I worked on one e-commerce site that offered around 3,000 product and category pages, yet had around 40,000 pages indexed in Google, including versions of URLs with both HTTP and HTTPS protocols, www and non-www subdomains, and many URLs carrying sorting and filtering parameters. After I reduced the indexed URLs to a number closer to the number of products the site offered, those pages ranked better in search results.
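As a rough illustration of the kind of care described above (the URL patterns here are hypothetical; the right rules depend entirely on how a given site structures its parameters), sorting and filtering parameters can be kept out of the crawl with robots.txt rules, while individual pages can opt out of the index with a meta robots element:

```
# robots.txt — hypothetical patterns for faceted-navigation parameters
User-agent: *
Disallow: /*?sort=
Disallow: /*?filter=
```

```html
<!-- On pages that should be crawlable but kept out of the index -->
<meta name="robots" content="noindex, follow">
```

Note that robots.txt blocks crawling while the meta robots tag blocks indexing; a page blocked in robots.txt can't have its noindex tag seen, so the two tools address different URLs, not the same ones.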
I just returned from a few days in Las Vegas and the Pubcon Conference.
I had the chance to see some great presentations and talk to a number of interesting folks, and the company where I am Director of Search Marketing, Go Fish Digital, won a US Search Award for Best Use of Search for Travel/Leisure for a campaign we did for Reston Limo.
I wanted to share my presentation from the conference here as well.
A few years ago, I wrote a post about Google’s OneBox Patent Application. I was brought back to it by a new Google patent that looks at answering questions within similar answer boxes and showing rich content, as in the example below:
A patent filed by Google a couple of years ago and granted today takes another look at Oneboxes, and includes this statement early on:
A search engine provider, Google Inc. of Mountain View, Calif., has developed an “answer box” technology, known as OneBox, that has been available for several years. Using this technology, a set of web search features are offered that provide a quick and easy way for a search engine to provide users with information that is relevant to, or that answers, their search query. For example, a search engine may respond to a search query regarding everyday essential information, reference tools, trip planning information, or other information by returning, as the first search result, information responsive to the search query, instead of providing a link and a snippet for each of a number of relevant web pages that may contain information.
A Google patent granted this week targets map spammers who submit information about businesses to Google Maps in a manner referred to as keyword stuffing.
The patent attempts to catch words submitted by business owners as business names, computing a surprisingness value for combinations of words within a business title to determine whether a business listing is legitimate or fraudulent.
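The patent doesn’t publish a formula, but one way to sketch a surprisingness value is as a co-occurrence statistic: word pairs that appear together in a title far less often than their individual frequencies would predict score as surprising. Everything below — the function name, the smoothing, and the use of negative pointwise mutual information — is my own illustration, not the patent’s actual method:

```python
import math
from collections import Counter
from itertools import combinations

def surprisingness(title_words, word_counts, pair_counts, total_titles):
    """Score how surprising the word combinations in a business title are.

    High scores mean words that co-occur far less often than chance would
    predict, a possible sign of keyword stuffing in a business name.
    """
    scores = []
    for w1, w2 in combinations(sorted(set(title_words)), 2):
        p_w1 = word_counts[w1] / total_titles
        p_w2 = word_counts[w2] / total_titles
        # Add-one smoothing so unseen pairs don't take log of zero.
        p_pair = (pair_counts.get((w1, w2), 0) + 1) / (total_titles + 1)
        # Negative PMI: rarer-than-chance co-occurrence => positive score.
        scores.append(-math.log(p_pair / (p_w1 * p_w2)))
    return max(scores) if scores else 0.0
```

With counts from a corpus of legitimate titles, a name like “Pizza Palace” (whose words frequently co-occur) scores low, while a stuffed title combining unrelated terms scores high.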
Traditionally, in Google Maps, the ranking signals used to include business listings in search results depend upon a business’s distance from the searcher, how prominent the business might be on the Web, and how relevant its title might be to the query used to find it.
When someone searches for a business, Google Maps may surface prominent businesses based on the searcher’s location. This patent targets people who fake information in business listings to attract searchers to unrelated websites, taking advantage of well-known businesses located in a specific area:
Last year I wrote a post titled Google on Finding Entities: A Tale of Two Michael Jacksons. The post was about a Google patent that described how Google might tell apart different entities that share the same name. The patent it covered was filed in 2012 and granted in 2014. Google was also granted a new patent on disambiguating entities this week, which was originally filed in 2006. It is worth looking at this second one, given how important understanding entities is to Google.
It contains a pretty thoughtful approach to understanding and distinguishing between different entities within documents and queries.
The Web is filled with factual information, and search on the Web has been changing to take advantage of all of the data found there. Mainstream search engines such as Google, Bing, and Yahoo traditionally haven’t given us simple, short answers to our queries; instead they show us a list of Web pages (historically referred to as 10 blue links) where that data might be found, forcing us to sort through the list to find an answer.
Google introduced providing direct answers to questions at the Google Blog in April 2005, in Just the Facts, Fast.
That may have been in response to Tim Berners-Lee writing about the Semantic Web back in 2001, where he alerted us to the possibilities that freeing data otherwise locked into documents might bring to us. By search engines finding ways to crawl the web collecting information about objects and data associated with them, we begin approaching the possibilities he mentioned. And we get answers that we otherwise couldn’t find as easily.
As the story tells us, Larry Page shut the program down out of concern over how invasive it was: it would send phone owners notices in Google Maps seconds after they entered a store that had electronic beacons set up. After reading about the cancellation, I thought I would share the patent so that you could learn what it was about. The patent is:
A patent granted to Google this week attempts to identify similarities between different types of entities when it finds information about them on the Web. It refers to these types of similarities as commonalities, as in things the entities may have in common. Google may use these similarities in a number of ways, such as supplementing search results with related information based upon results that might be in the same category or located in the same region.
The things identified as common may be things that are moderately distinctive, but not completely rare.
The patent says “entities,” but it seems to focus on different businesses that might share some similarities. For example, it refers a few times to a food critic writing about restaurants, and tells us that the things such a critic writes about different restaurants might be used to find similarities between those places.
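The patent doesn’t spell out an algorithm, but the idea of keeping commonalities that are moderately distinctive rather than either ubiquitous or unique can be sketched as a frequency filter over entity attributes. The function name, thresholds, and restaurant attributes below are all hypothetical illustrations:

```python
from collections import Counter

def find_commonalities(entity_attrs, min_share=2, max_frac=0.5):
    """Find attributes shared by several entities ("commonalities").

    Keeps attributes that are moderately distinctive: held by at least
    `min_share` entities, but by no more than `max_frac` of all of them,
    so ubiquitous traits and one-off traits are both filtered out.
    """
    counts = Counter(a for attrs in entity_attrs.values() for a in set(attrs))
    n = len(entity_attrs)
    return {a for a, c in counts.items() if c >= min_share and c / n <= max_frac}
```

Run against a handful of restaurants, an attribute like “serves food” (true of all of them) drops out as uninformative, while a trait shared by only a couple of them survives as a commonality worth surfacing.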